Vague concepts are intrinsic to human communication. Indeed, it would seem that vagueness is central to the flexibility and robustness of natural language descriptions. If we were to insist on precise concept definitions then we would be able to assert very little with any degree of confidence. In many cases our perceptions simply do not provide sufficient information to allow us to verify that a set of formal conditions is met. Our decision to describe an individual as 'tall' is not generally based on any kind of accurate measurement of their height. Indeed, it is part of the power of human concepts that they do not require us to make such fine judgements. They are robust to the imprecision of our perceptions, while still allowing us to convey useful, and sometimes vital, information.

The study of vagueness in Artificial Intelligence (AI) is therefore motivated by the desire to incorporate this robustness and flexibility into intelligent computer systems. This goal, however, requires a formal model of vague concepts that will allow us to quantify and manipulate the uncertainty resulting from their use as a means of passing information between autonomous agents. I first became interested in these issues while working with Jim Baldwin to develop a theory of the probability of fuzzy events based on mass assignments.