The existence of high-speed, inexpensive computing has made it easy to look at data in ways that were once impossible. Where a data analyst was once forced to make restrictive assumptions before beginning, the power of the computer now allows great freedom in deciding where an analysis should go. One area that has benefited greatly from this new freedom is that of nonparametric density, distribution, and regression function estimation, or what are generally called smoothing methods. Most people are familiar with some smoothing methods (such as the histogram) but are unlikely to know about more recent developments that could be useful to them.

If a group of experts on statistical smoothing methods is put in a room, two things are likely to happen. First, they will agree that data analysts seriously underappreciate smoothing methods. Smoothing methods use computing power to give analysts the ability to highlight unusual structure very effectively, by taking advantage of people's abilities to draw conclusions from well-designed graphics. Data analysts should take advantage of this, they will argue.
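To make the idea concrete, here is a minimal sketch (not from the text, and with illustrative data and bandwidth choices of my own) contrasting the familiar histogram with a more recent smoothing method, the Gaussian kernel density estimate. Both estimate the same density; the kernel estimate simply replaces hard bin boundaries with a smooth weight centered at each data point.

```python
import math
import random

random.seed(0)
# Hypothetical bimodal sample: the kind of structure smoothing can reveal.
data = ([random.gauss(-2, 0.7) for _ in range(200)] +
        [random.gauss(2, 0.7) for _ in range(200)])

def histogram_density(x, data, bin_width=1.0, origin=0.0):
    """Histogram density estimate: fraction of points in x's bin, per unit width."""
    left = origin + math.floor((x - origin) / bin_width) * bin_width
    count = sum(1 for d in data if left <= d < left + bin_width)
    return count / (len(data) * bin_width)

def kde(x, data, bandwidth=0.4):
    """Gaussian kernel density estimate at x: average of smooth bumps at each point."""
    n = len(data)
    return sum(math.exp(-0.5 * ((x - d) / bandwidth) ** 2)
               for d in data) / (n * bandwidth * math.sqrt(2 * math.pi))

# Evaluate both estimates near the two modes and in the valley between them.
for x in (-2.0, 0.0, 2.0):
    print(f"x={x:+.1f}  hist={histogram_density(x, data):.3f}  kde={kde(x, data):.3f}")
```

Both estimates should show high density near the two modes and low density between them; the kernel estimate varies smoothly with x, while the histogram jumps at bin boundaries, which is why the former often produces cleaner graphics for spotting structure.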