Making computers easier to understand has been one of Clayton Lewis' long-running interests.
During the formative years of computers, he says, mathematicians and computer scientists didn't know how to make the technology as easy as possible to understand. That's partly what motivated him. After earning a master's degree from MIT, Lewis was hired by IBM. There he worked with a pioneering psychologist, John Gould, and was encouraged to earn a degree in psychology in order to tackle the problem.
While taking Ph.D. courses, he became a visiting instructor at the University of Texas. He found he was committed to teaching, but worried that in a research university setting he wouldn't be able to focus on teaching and still expect to succeed. He returned to the IBM Watson Research Center, where he worked on problems related to human-computer interaction.
Still caring deeply about teaching, Lewis decided to once again give academia a try. In 1984, he was recruited to the University of Colorado where he is a professor of computer science and a fellow at the Institute of Cognitive Science. For the past five years, he has had the role of Scientist in Residence at the Coleman Institute for Cognitive Disabilities, which promotes the development of technology for people with cognitive disabilities.
"This has been a great setting for me all the way around," he says. "Thankfully, I was able to work out a career in which I could devote considerable attention to teaching. Happily, I've received a lot of encouragement from the university."
In 1989, he was named to the charter class of the President's Teaching Scholars Program, which he describes as a wonderful community that has developed a variety of initiatives to help faculty with learning and teaching.
— Cynthia Pasquale
1. How did you choose this career?
My life was influenced by Martin Gardner, who was trained as a journalist and became a leading amateur mathematician and magician. He wrote many books, but what touched me and many other people was his column, Mathematical Games, published in Scientific American. He had an amazing ability to learn about some interesting topic in mathematics, then write a column about it, and he always included something you could actually do.
In the '50s and '60s, before computers were much of a factor, one of his columns showed how you could make a computer that would actually learn how to play a simple game. Through this, I began forming an interest in computers. Another thing was that I was given gift money to buy a Geniac. It was not a computer by today's standards, but with its Masonite panels, fasteners and wire, you could build a device that set up complicated logic circuits. The kit's maker included in the box a reprint of a paper by Claude Shannon, the founder of information theory. The paper discussed the logic of switches and really explained the basis for everything you could do with the Geniac.
Also, someone brought a computer to my high school for a summer course and I took that course. So after high school, my ambition was to work in this field. In college, however, I was too early to study computer science. There were few programs available, so I was a mathematics major as an undergraduate.
2. Your research includes human-computer interaction. What does your work entail and what do you hope to accomplish?
My overall goal is to make it possible for people to understand computer systems. There are many benefits to that, including productivity. One of my colleagues has written about the fact that a computer's impact on productivity often is negative, so making improvements is one of my goals. I've been particularly interested in making programming more accessible to people so they can make the computer do what they want it to do. Most computer use isn't like that. Someone else has created something that you are going to use, and you don't have anything to say about what it's like. So I'm interested in opening up the computer as a tool.
My most impactful contributions enable people to evaluate designs so creators can identify strengths and weaknesses and improve the designs. There's no substitute for coming up with ideas, evaluating them and testing the designs with real people. People are so complicated that you can't anticipate how a design will strike them.
I've contributed to a couple of methods of evaluation that are in everyday use, especially in software development. One moved over from the experimental psychology lab – the "thinking aloud" method. You give someone something to do, in this case with a computer program in prototype form, and ask them to do an action and tell you what they are thinking. This exposes a lot of problems that are difficult to discern otherwise. For example, labels aren't always interpreted the way the designer thought they would be. It's very hard to tell that from just watching someone use something, but if they narrate what they are doing, these clashes become very evident.
A colleague, Peter Polson, and I and students also developed the Cognitive Walkthrough method. This technique looks at a design step-by-step to determine whether an interface adequately cues the actions people are going to take.
Of late, I've put less work into this area. Compared to where we were when I started this line of work, most of those problems have been very substantially solved. Now I'm interested in making programming more accessible. The field tolerates programming languages that are barely good enough. Beginners are subjected to all kinds of frustrations because the field just can't be bothered to deal with it. I harbor the hope that we can do a better job, and I hope to come up with programming approaches that would make things radically easier.
I'm also involved in an international project where institutes around the world have partnered to reshape what you have to do to create Web applications. We have a long way to go, but the foundational work is being done in how software is structured to open the way for easier ways to create applications.
3. Computer innovations are increasing so rapidly, they seem to be leaving the normal user behind. Do you think that's true?
It's hard for us to accurately judge the trend. What you have to realize is that we're aware of the things we don't know how to do, but what's easy to lose sight of is how much more we're able to do now without being aware of it. If we focus on what we're able to do, there's been a huge increase in the number of things we can do. You don't have to go back very far to a time when there wasn't a service like Amazon to order books online. For instance, it used to be that if you needed to change a shipping address, you had to enter the information in a specific way, in a form the computer would accept. Now you can add information in a way that makes sense to you, and the system will accept that. People writing these programs understand it's much better to have the computer expend extra work rather than make users do it.
4. You worked for the Watson Research Center at IBM. Did you have any connection to Watson, the famous Jeopardy computer champ?
During my first job with IBM out of grad school, one of the things going on there was early work on getting computers to understand natural language. I made a vow that I was not going to work on natural language processing again because I was convinced that the problems were too difficult to address. I wasn't wrong in that: the techniques we were trying to use back then still have not worked to this day. But I was wrong in not taking a broader view. What I didn't anticipate is that there are completely different approaches that do work, as exemplified by Watson.
The Watson system scavenged the knowledge it uses to answer questions from the Web. I joke with one of my colleagues about my vow, and he finds it very amusing.
5. What do you consider to be one of your proudest achievements?
One of my greatest sources of satisfaction is that I've done some work that people all over the world find useful. At a conference, one of the people sitting at the table said, "Boy, there's a great technique that I think we should be using for this problem. It's called the Cognitive Walkthrough. There's this paper by this guy, Clayton Lewis ..."
Of course, he had no idea who I was, but other people did know, and they were chuckling about it. What could be better than having somebody who has no idea you're in the audience say positive things about how useful something you did is? I'm very fortunate that I have been able to do some things that people find useful, so that's enormously satisfying.
I hope the work I'm doing now with the Global Public Inclusive Infrastructure, which hopes to reshape the Internet to make it more useful for people with disabilities, will have the same effect.
Teaching is important to me. I recently got a note from a former student who was struggling and left CU. He wrote that he had finished his bachelor's and that he appreciated what I had told him. He felt that what I said was useful in allowing him to go on and be successful. I'm grateful to be able to work with so many students and make contributions to their lives.
Want to suggest a faculty or staff member for Five Questions? Please e-mail Jay.Dedrick@cu.edu