Chuck Thorpe

Chuck Thorpe is a Principal Research Scientist and runs the Navlab group in the Robotics Institute of CMU. His interests are in computer vision, planning, and architectures for outdoor robot vehicles. Since 1984, the Navlab group has built a series of 10 robot cars, HMMWVs, minivans, and full-sized passenger buses. The research is funded by DARPA for building off-road scout vehicles, and by the Department of Transportation for traffic safety and automated highways.
Transcript Text: 

[recording begins abruptly]

Chuck Thorpe: -- and so tell me, you are familiar with Newell and Simon still active, and McCarthy and Minsky coming by to visit. So when I was a young graduate student, the pioneers of computer science were fading from the scene. Eckert and Mauchly were still alive, but were on their way out. _______ had been gone for a number of years. The pioneers of AI were still teaching courses every day and I got to take courses from a couple of them, and the pioneers of robotics were still inventing the field.

Q: Yeah, I got to email Simon a few times. Worked on the encyclopedia kind of science family too.

Chuck Thorpe: Simon was this towering intellectual figure that you didn’t want to disagree with. I was once at a computer science faculty gathering and started to talk about Joshua Chamberlain, who was the hero of the Battle of Gettysburg. He held Little Round Top and was a professor. By the time he retired, he was president of Bowdoin College and he had taught every course in the curriculum except for math, and Herb looked at me and said, “So what’s wrong with math?” Herb was that way. He was a founder of the business school, a founder of the computer science school, a founder of cognitive psychology, lectured in the philosophy department, lectured widely across the entire university. He was just that kind of a polymath. Interested in everything. Newell wasn’t quite as broad as Simon, but he was still very broad and respected across the university, and he was a football player with a bad back, so we’d be having a faculty meeting like this and he’d be standing up against the wall with his bad back. You’d hear this voice booming down from above you and it was this physical and intellectual giant telling you what you should do, and we all sat there and looked up to him, again physically and intellectually, and did what he told us to do.

Q: There has been a lot of research about how people respond to voices that come from higher above.

Chuck Thorpe: A voice from above, yes. Yes. Well, where should we start?

Q: Well, why don’t you start by telling us where you were born and where you grew up? Do we want to do-- we can do that again.

Chuck Thorpe: We can do the crucial papers later.

Q: Okay. Where you were born, where you grew up, and how you got into school or robotics or both.

Chuck Thorpe: I was born in Michigan, but I sort of grew up lots of different places. My father’s a surgeon. In the different parts of his surgery training we lived in Michigan, Illinois, Texas, Dover, Delaware, eventually Brussels, Belgium where he studied tropical medicine, and then my father spent 30 years in the Congo which became Zaire, which became the Congo where he ran a hospital. So I spent eight years of my life out there, seven years as a kid and a year out there teaching high school. I went to school as an undergrad expecting to be a doctor like my dad. Went to North Park College in Chicago and I got tired of memorizing holes in skulls and discovered computers and thought that those would be kind of exciting, but didn’t quite know what to do with it. I was backpacking the summer between my junior and senior year with my chemistry professor and my psychology professor and my psychology professor said, “So what are you interested in studying?” I said, “I think artificial intelligence.” “Oh,” he said. “You ought to go to Carnegie Mellon. They have this guy named Herb Simon who just won the Nobel Prize.” I thought cool. I better go look up Carnegie Mellon and find out about it. Of course that was before the internet, so you had to go down to the library and look up Carnegie Mellon. I found out about Simon and Newell and all of that cool stuff. Applied for graduate school. Was quite fortunate to be accepted. Showed up in August of ’79. At Carnegie, when you’re accepted into computer science, you’re accepted into the entire department. Not to a particular advisor, and we had the immigration course where each of the professors stands up and says, “I’m so and so. I work on this and I’m looking for a couple of graduate students to join me.” It’s a good way to get the breadth of the entire school pretty quickly and a good way to start the match process. Raj Reddy
stood up and said, “I’m thinking of starting a robotics institute.” I said cool and I signed up to go to work for him. Raj, it turns out, was even smarter than I thought. He had decided to start the robotics institute, but was smart enough to convince the president of the university, Dick Cyert, that it was Cyert’s idea, not Raj’s idea to start it, and he, Raj, would let his arm be twisted into running it if Cyert would go out and raise all the funds for it. So Cyert thought this was great and went out and talked Westinghouse into funding our first industrial contract and Raj got the Office of Naval Research and Robo Choko [ph?] convinced to fund the first government contract. So those two contracts hit about the time I became a graduate student. Now if you wander around here and talk, Lee Weiss came in in electrical engineering. Jim Crowley who’s now in Grenoble came in electrical engineering. Each of them will claim to be the first robotics graduate student. I’ll claim to be the first one who went to work for Raj and joined the robotics institute. So the three of us all became robotics-- graduate students funded by the robotics institute in 1979 and pursued our careers from there on out. When we started the robotics institute, the entire institute could meet in Raj’s office because it was just Raj and a couple of graduate students, but it expanded pretty rapidly. Jerry Agin came in to do industrial vision. He had been at SRI doing the world’s first commercial computer vision system, which may be the world’s first commercial artificial intelligence system. A very simple binary vision system that had some pretty, in retrospect, obvious applications. Black chocolates coming out of the oven on a white conveyor belt, a nice binary vision system that could look down there and tell a robot arm where the chocolates were to pick them up and stack them so you didn’t have “I Love Lucy” women trying to load chocolates into their trays. Jerry came here early on.
We attracted people like Art Sanderson, who was an electrical engineering professor, to be part of the robotics institute, and one of the early hires was Hans Moravec, and Raj assigned Hans to help me, so Raj and Hans were the co-advisors for my thesis. Marc Raibert came in pretty quickly. Brought him in from Caltech to build hopping machines here. After a couple of years, Takeo Kanade, who had been here as a visitor, was brought back then as faculty, and Matt Mason came over from MIT. So Raj, Hans, Takeo, and Matt were my thesis committee and I learned a lot from those guys. If you look at the directors of the robotics institute to date it’s sort of Chuck Thorpe and his thesis committee because it was Raj and Takeo and then me and then Matt. It was fun to work with those guys. I’ve been working with those guys now for 30 years and I continue to benefit from them. Brief diversion away from robotics: Raj was my advisor. One day he called me up out of the blue. In fact, I was on a trip in China, and said, “How would you like to go to Qatar?” and I said, “You mean like tomorrow?” “Oh no, no.” He said, “You got lots of time.” I said, “Well, let’s talk about it when I get back to Pittsburgh.” Raj had become an advisor to the state of Qatar on how to wire the entire country. They liked his advice so much that they said, “We’re thinking of setting up some universities. Do you know anyone who’d like to come over here and teach computer science?” He said, “Sure. Carnegie Mellon would, but computer science is too small. Why don’t we do two majors? How about computer science and business?” They said, “Fine,” and so tapped me to go and start the Carnegie Mellon branch campus in Qatar, which is what I did 2004 to 2010. I happened to be in Qatar and Raj happened to be in Qatar on a visit when we heard he had won the von Neumann [ph?] prize.
So I threw a little reception for him and invited the people high up in the Qatari government who he advises and in my introductory remarks I said, “You know, when I was a graduate student, Raj gave me invaluable advice, but he would walk into my office about once a month and say, ‘What you should do is,’ and he’d have a whole other thesis for me. I had to say, ‘Well, thank you, Raj,’ and the trick to dealing with Raj is to know which of his advice to pay attention to and which to file for later use.” As dean in Qatar, Raj would call me up about once a month and say, “What you should do is,” and I would say, “Thank you Raj,” because it was invaluable advice, but I already had a campus to run and half the time he had ideas for other campuses I should go run. I’ve caught on that he does the same thing to entire countries. He calls up the sheikha of Qatar and says, “What you should do is,” and all my Qatari friends are sitting there nodding and smiling. Raj, at the age of 70-something, still has so much energy and so many projects for graduate students, for campuses, for entire countries, it’s fun just to watch him and follow him around and I think he’ll never retire and even if he does, we have a lifetime worth of graduate theses and campuses and countries to go work on. So it’s always fun working with Raj.

Q: So you came to study computer science AI and then--

Chuck Thorpe: Came to study computer science AI. As part of that, Raj formed the robotics institute. At the time it was an interdisciplinary research institute. So you would still do your degrees in individual places, either in electrical engineering or in computer science, or oddly enough in the business school. Our business school here is way over on the extreme of being quantitative. If you think about business schools in general, there’s the Harvard model where you read case studies and there’s the Carnegie model where you do simulations and analyses and operations research and so forth. That’s the heritage of the founding of our business school. Our business school was founded by one of the Mellons who was a general in World War II and was so frustrated with the difficulty of doing logistics that he said logistics should be a science, not an art, and he formed the Graduate School of Industrial Administration, which for many years didn’t even give an MBA. They gave a master of science in industrial administration, and one of the founding professors was a guy named Herb Simon. So our business school does factory layout and logistics and so forth, and some of the early graduate students who did their Ph.D.s funded by the robotics institute lived in what is now the Tepper School and got their Ph.D.s in business. So I stayed in computer science and did my Ph.D. out of that. Jim Crowley and Lee Weiss did theirs out of electrical. Steve-- I’ll think of his name. He’s now a dean in Singapore-- did his out of the business school. Mark Fox did his out of computer science, but with a joint appointment in the business school and so forth and so on.

Q: What was the first project you worked on as a grad student?

Chuck Thorpe: He sent me to work with Hans Moravec working on mobile robots. Hans had built a mobile robot for his Ph.D. thesis at Stanford that would take a series of pictures, use stereo vision to see where obstacles were, plan a path, move, and it moved so slowly. It moved about one meter every 15 minutes. It was such a slow motion that when he ran it outdoors, in the 15 minutes between taking pictures the sun would move and the shadows would move, and in fact his system locked onto these nice sharp-edged shadows and saw that they were moving and saw that the real objects weren’t moving consistently with the shadows, decided it had more confidence in the shadows than in the real objects and threw out the real objects and locked onto the shadows and ran over his chairs, etc., that he had outdoors. It was a great success. Nobody else was moving at all using stereo vision, but it was very slow. So I said, “You know, we ought to be able to improve the performance of that, and improving the performance of that will undoubtedly show some interesting scientific ways that will be worth a thesis.” And so that’s what I did. Instead of 15 minutes, it was more like 30 seconds between moves and it was more reliable and it had some more interesting path planning and so forth. So that was my thesis. Hans was the director of that lab. Alberto Elfes and Larry Matthies were both in that lab and they’re now off at JPL doing great things and I stay in touch with those guys.

Q: Was this funded by either of those two initial--

Chuck Thorpe: This was funded by the Office of Naval Research. One of the shocks of my graduate student career was sitting there as a grungy graduate student wearing my jeans and t-shirt, hacking in the lab, and hearing a rustling at the door and looking up and finding Raj, the director of the institute, and Dick Cyert, the president of the university, fighting over who got to hold the door open, and in walked this admiral in his dress whites with his rows of ribbons on them. Nobody had told us we had a sponsor visit coming and we had empty pizza boxes and half-drunk cans of Coke and wires hanging out all over the place and didn’t necessarily have a demo ready to show, and in came these guys. That could’ve been Robo Choko [ph?], who was our first funder, but it could’ve also been the second funder from ONR, a guy named Brad Mooney, and I’ve run into Admiral Mooney several times since then. He remembers visiting our lab, so it could very well have been him. Mooney was-- well, he had run the deep submergence rescue vehicle, DSRV, the one which, among other things, picked up the missing H-bomb that the US jettisoned off the coast of Spain. Mooney, as the head of the Office of Naval Research, was the publisher for the Naval Institute Press. The Naval Institute Press wrote these dry tomes on sediment analysis off of the coast of Japan. If they were lucky they sold 1,000 copies. They thought they should maybe get into fiction publishing. One day they had an insurance salesman walk in with an idea of a book about underwater submarine warfare. They thought cool. Then they read the book and they thought, “Wait a minute, this is too good to have been written by an insurance salesman. He must’ve had an inside source. Let’s make up some questions and see if we can catch him off guard next time he comes in and we’ll figure out who really wrote the book,” and he came in and they said, “So Mr.
Clancy, about this ‘Red October,’ have you ever considered publishing a book about say deep submergence rescue?” He said, “Oh yeah, that’d be cool. You could make this happen and you could do this and you could do this. Say, Admiral Mooney, didn’t you used to operate the deep submergence rescue vehicle?” They decided Tom Clancy did know all of this kind of stuff and they went ahead and published “The Hunt for Red October,” which outsold all other volumes that they ever published combined, and was so popular he went off and found a more mainstream publisher for the rest of his work. So that’s Admiral Mooney and that’s one of our early funders in the robotics institute. Westinghouse, which was our industrial funder, funded some of the early work here and I got to go on the first robotics plant trip. Westinghouse was interested in a bunch of things. Could we manufacture better light bulbs for them? Could we do quality testing? The first plant trip was to a plant that made turbine blades. This was during one of the periodic downturns in the US electrical generating industry, so instead of selling new steam turbines for electrical generation, they were refurbishing steam turbines, and someone would take down a steam turbine for inspection and would find one cracked blade and they would need one blade done now, and they were losing a million dollars a day while the steam turbine was out of commission. A steam turbine blade is a long thing. Some of them are seven feet long with a complicated twisted, tapered airfoil shape and a big root, and they’re traditionally made in a-- they’re stamped in a forge, which takes days to set up, and then machined to get all of the fine shape and cleaned, etc., and if you’re building a thousand blades, you don’t care if it takes you a day and a half to set up the stamp system, but if you want to build one, well, you’ve got all kinds of interesting problems.
You’ve got: could you build them in a different process that doesn’t require setting up this stamp? How do you rerun your schedule for your job shop to get one blade through quickly without distressing the rest of it? Are there better ways to do the fine machining and the finishing so that you can do a better job of the inspection so you don’t have as many blades fail? This triggered a whole batch of different projects, some of which live on today. Some of the job shop scheduling lives on in things that Steve Smith is up to. The very first robotics plant trip: Jerry Agin, Raj, me, Mark Fox, a couple others, Paul Wright who’s now off at Stanford, I think. We all went and piled into a Westinghouse turboprop and took off to go down to Winston-Salem and climbed up to altitude and it blew an engine right there, thereby demonstrating the importance of reliability in turbines. This wasn’t a steam turbine; it was a gas turbine, and thereby perhaps endangering the entire future of the robotics institute. Turned around, came back into Pittsburgh, landed. No problem. During that time when we were circling, coming back in to land, I found out that Raj had spent part of his early career as a fighter pilot for the Indian air force, which I hadn’t realized. Anyway, so I was not part of that research project, but I watched some of that early stuff and it was kind of interesting thinking about the industrial side of it as well as the mobile robot side. Those two themes have continued for the rest of the duration of the institute. Mobile robots and industrial automation. Somewhat to my surprise, the mobile robots have grown a lot faster than the industrial automation part of it, and that’s been the big thing that the robotics institute has been primarily known for. Jumping ahead, 1984, Takeo Kanade said, “So what are you guys interested in?
DARPA has asked us for some ideas what they should do next.” I said, “Well, I’m interested in outdoor mobile robots.” It turns out that was the right answer. DARPA was starting a program called Strategic Computing. You guys may want to talk with Clint Kelly, who was at DARPA putting that program together and has since retired as executive vice president of SAIC and still sits on our advisory board and other advisory boards. DARPA wanted to say let’s have some revolutionary over-the-top new ways of having massively parallel computing, the Thinking Machine, the Butterfly at BBN, etc., and if we had this great new computing it should be useful for all kinds of different applications. Let’s pick, oh, say three applications. Why three? Well, one to keep the Army happy, one to keep the Air Force happy, one to keep the Navy happy. The Air Force one was an intelligent copilot and a lot of it had to do with speech understanding so that a pilot of a single-seat aircraft could speak and have the system understand what he was up to. The Navy was fleet management and weather forecasting and so forth. The Army was autonomous ground vehicles. So they were launching the Autonomous Land Vehicle project. So as a graduate student, I was busy helping write the proposal for what we would do with autonomous land vehicles. I defended my thesis in September of ’84 and Raj came up to shake my hand. He said congratulations. I was thinking I was going to take a couple weeks off to figure out what I wanted to do next. He said, “Congratulations. What you’re up to next is a meeting in my office starting in five minutes where we’re going to talk about autonomous land vehicles.” That was my break between finishing my thesis and starting my post-doc. And we had submitted the proposal for what was to become the Navlab in August.
We hadn’t heard back yet, but Raj said, “Let’s get started anyway.” We heard back pretty quickly that they wanted us to work on road following, so we built a series of vehicles to take outdoors and to bounce around off road. We had worked on indoor vehicles until then. My thesis was still start-stop, start-stop, and our vehicles still had off-board computing. Now remember this is 1984. State-of-the-art computing was a VAX 11/780, which was, oh, about the size of the wall, maybe eight feet long, maybe eight feet tall, a couple of racks with a whole million instructions per second of processing power, and an attached digitizer that took up a whole rack. So the robots we were building were sort of the size of a table and the computer was sort of twice the size of a table, so we all had long umbilical cords. We said, “All right, if we’re going to go outdoors where we can’t have an umbilical cord and if we’re going to carry the computers with us, we need to have a real truck.” So we built Navlab 1, which was a Chevy van, and it needed to be that big because we wanted to have onboard sensors and onboard computers and onboard power supplies and onboard graduate students. I wanted my students to be right there rather than back in the lab for a couple of reasons. If there’s a bug, they could be right there and they could fix it and then they could keep running the experiments. They were highly motivated to have high-quality software because they were going to be, as the saying goes, first at the scene of the accident. You write much better software when you know you’re going to be riding on it, and very importantly, when you talk about gut feel, if you’ve got an instability in your low-level controller and you’re riding on the vehicle and it’s going down the road like this, it gives you a gut feel that you had better fix this, and now.
So we loaded this up with, at the time, Sun 2 workstations, loaded it up with graduate students, and took it out and really had vehicles running continuously in the natural outdoor world for the first time, and that was a big deal. It was a lot of fun to be able to do that. Also because we weren’t tied to an umbilical cord, we could run here in the park. We could run out on the streets around here. We could take it up to my house and run it around there. We could take it to the local slag heap and run there. We could run off road, on road, really start to discover what it was like to run in the outdoor world.

Q: Did you need some kind of special permission to run those kinds of vehicles around?

Chuck Thorpe: We never had special permission to run those vehicles. We always had a safety driver sitting there, and if something went wrong, we had a great big red button. You just hit that and took control, or we deliberately designed the steering motor to be underpowered so that you could overpower it, and the amp would sense that and fault, and you could take over and steer. We had the police show up once. When we were running it in my neighborhood, I think the neighbors weren’t sure what all of this noise was, because it had loud generators and stuff like that, and so the police came. We put them on board and gave them a ride and they weren’t quite sure what to make of it. We did try to get permission to run in the drivers’ test facility for the City of Pittsburgh because we thought here’s a nice place. It’s got parking areas where you can do autonomous parallel parking, and they denied permission on that. I think they didn’t really believe that we had an automated vehicle. I think what they thought is these guys must be from a driver’s ed school and they’re trying to build very accurate maps so that they can teach people how to take the test without really knowing how to drive. So the police didn’t trust us enough to do that. Other than that, we’ve never had any issues with that.

Q: How many grad students did you have in the truck?

Chuck Thorpe: About the most you could squeeze in at a time was five and so that was about the most that we would have working on it at the same time. For instance, one person working on the laser scanner, one person working on the road following vision system, one person working on the path planning system, one person working on the system software that tied it all together, and then my job was to go fetch the donuts and to make sure that the other people all worked harmoniously together.

Q: How much of the architecture of that system was derived from your earlier work or Hans Morevec’s earlier work?

Chuck Thorpe: Not a lot of it was derived from Hans’s or my early work. Takeo Kanade asked us an interesting question at the beginning of the Autonomous Land Vehicle project and our part of it, which was Navlab. He said, “Give me a unifying characteristic of all successful computer vision systems.” Well, we had to think for a while because there weren’t very many successful computer vision systems by ’84, so “We give up, Takeo. What are you fishing for?” He said, “They were all done by one person, and now with the autonomous land vehicle, you have Martin Marietta as an integrating contractor and they’re working with University of Maryland and Carnegie Mellon and Hughes and SRI and ADS and trying to bring in UMass and Thinking Machines, and how are you going to have all of these people work together?” That was the first time we really had to take a step back and say let’s think about architectures, not just one person having it all in the back of their mind, but how do you build systems where all of this stuff can play together? How do you build interfaces so that one person’s obstacle detector can talk to somebody else’s path planner? So that was one of the roles that Carnegie Mellon played: thinking how the different pieces would fit together and supplying some of that system integration software to our friends at Martin Marietta. They were the integrating contractor on the big autonomous land vehicle. We were sort of a leading-edge integrating contractor figuring out how different bits and pieces could fit together.
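The interface question raised here, letting one group’s obstacle detector talk to another group’s path planner, comes down to agreeing on a shared message format. A minimal sketch in Python; every name, type, and number below is invented for illustration and is not taken from the Navlab or ALV software:

```python
# Toy sketch of the integration idea described above: modules built by
# different groups interoperate by agreeing on one message format.
# All names and numbers are illustrative, not from the Navlab code.

from dataclasses import dataclass
from typing import List

@dataclass
class Obstacle:
    x: float       # meters ahead of the vehicle
    y: float       # meters left (+) / right (-) of the centerline
    radius: float  # meters

def detect_obstacles() -> List[Obstacle]:
    """Stand-in for any group's detector; only the output type matters."""
    return [Obstacle(x=12.0, y=-0.5, radius=1.0)]

def plan_heading(obstacles: List[Obstacle]) -> float:
    """Stand-in planner: steer away from the nearest obstacle."""
    if not obstacles:
        return 0.0
    nearest = min(obstacles, key=lambda o: o.x)
    # Steer to the side opposite the obstacle, more sharply when close.
    return (-1.0 if nearest.y > 0 else 1.0) * (10.0 / max(nearest.x, 1.0))

# Any detector and any planner that speak the Obstacle format can be
# swapped in independently -- the integrating contractor's role.
heading = plan_heading(detect_obstacles())
```

With the message type fixed, each group can develop its module separately, which is the system-integration role the answer describes.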

Q: What were some of the strategies you used for that kind of integration?

Chuck Thorpe: Figuring out how it would all work together. We talked about a lot of different things. Should a robot have point-to-point connections or should it have a blackboard? How should the layers of the architecture be put together? Brooks was going around saying architectures are hooey. It should all be subsumption. You should just have something which can avoid obstacles and on top of that layer something which avoids obstacles while heading in some random direction and on top of that have something which avoids obstacles which heads in some random direction and the random direction is biased toward a goal. We said, “Well that’s a good idea if you’ve got artificial insects exploring some terrain, but if I’m going down the road at 60 miles an hour, I don’t want to be probabilistically right.” I want to have something much more tightly wired that works when I’m going down the road. So we had several different vision systems all of which tried to spit out the same description of the road to the road following software. We had a decoupling of the local navigation which was steering down the road versus the mapping and global navigation system which was telling us how to go down the road. That was important for the following reason. People who tried to put the whole system together into one coordinate frame, you were doing fine until you saw a landmark and oops, you all of a sudden saw a landmark and you said I’m not actually here, I’m here, and then your whole world jumped, but then you had to keep track of but the road was right here. Is the road over there and I’m over here or was the road supposed to jump with me? We said no, no, no. We’ll keep those separate. We’ll have a local coordinate frame that says they saw an obstacle here. I saw the road here. 
I’m steering this way and if we say, “Oh, that obstacle is actually the landmark over here, oh, now I know that my entire local map is ten feet off from my map and I can keep around that ten-foot offset and any time I need to go look at the map I know to apply a ten-foot offset, but I haven’t tried to disturb the local map so that I know in local relative terms how to navigate to avoid that obstacle while going down the road.” So those separations of the global frame from the local frame, those having multiple things feed into the local frame, the dealing with the asynchronous nature of landmark updates, obstacle detection, road detection, those were all some of the core clues that we had to look at.
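The decoupling described here, keeping a running offset between the local frame and the global map instead of jumping the whole world when a landmark is matched, can be sketched roughly as follows. This is a toy illustration; the class and field names are invented, not from the Navlab code:

```python
# Toy sketch of the local/global frame separation described above.
# All names are illustrative; the real Navlab software differed.

class Navigator:
    def __init__(self):
        # Obstacles and road observations live in the local frame only.
        self.local_obstacles = []   # (x, y) in the local frame
        self.offset = (0.0, 0.0)    # local frame -> global map offset

    def observe_obstacle(self, x, y):
        """Record an obstacle in local, relative coordinates."""
        self.local_obstacles.append((x, y))

    def match_landmark(self, local_xy, map_xy):
        """A recognized landmark updates the local->map offset without
        disturbing the local map used for steering."""
        self.offset = (map_xy[0] - local_xy[0], map_xy[1] - local_xy[1])

    def to_map(self, local_xy):
        """Apply the running offset only when consulting the global map."""
        return (local_xy[0] + self.offset[0], local_xy[1] + self.offset[1])
```

When a landmark match reveals, say, a ten-foot error, only the stored offset changes; the obstacles and road the vehicle is actively steering around stay put in the local frame.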

Q: And did that approach differ from other approaches at the time, like say Ernst Dickmanns’ in Germany?

Chuck Thorpe: Yes it did. Ernst Dickmanns had a very, very structured format where he had a small loop and then a bigger loop and then a bigger loop and a bigger loop, and Kalman filters at every level. So something turning the steering wheel, and wrapped around that was something that closed the loop to the vision system seeing the road, and wrapped around that was something that steered the camera to look further down the road, and it was very tightly integrated so that your prior probabilities for the road were going to be taken into account, like the German autobahn rules for how quickly the curvature of a road was allowed to change. That’s a wonderful structured approach if you have a wonderful structured problem. Some of our roads in western Pennsylvania don’t follow the German autobahn rules. Some of our off road driving didn’t follow any rules at all. We didn’t think that we had those structured prior probabilities. We thought we had more asynchronous things coming and interrupting us. So we weren’t as focused on the structured environment and going at high speeds. We were much more open to lots of different inputs coming at different speeds. One of the things that I said at the time and I still believe, only half kidding, is that if you look at the way people design robot architectures, it reflects the structure of their research group as much as it reflects the structure of the particular problem. For instance, NIST, the National Institute of Standards and Technology. These guys do standards and technology, and the NASREM architecture that Jim Albus
did was one layer and then another layer and another layer and another layer, and it was the millisecond layer and the 10-millisecond layer and the 100-millisecond layer and the second layer and the 10-second layer and the 100-second layer, and it was sort of the millisecond layer was looking a few centimeters ahead and the 10-millisecond layer was looking a meter ahead and the 100-millisecond layer was looking 10 meters ahead and the second layer was looking-- etc., etc., etc. Well, when you think about it, NIST is a government agency. You’ve got your researcher who’s a member of a team who’s a member of a group who’s a member of a-- I forget how they have it-- division, branch, etc., etc., etc., and when Jim wants to get something done he assigns-- all right, here division. You have the problem of building the robot. Within the division, branch. You have the problem of building the 100-millisecond level of the robot. Okay, within the branch, you have the problem-- so this nested layer is how NIST is organized and that’s how he organizes his work to get it done and so it’s how he organizes his people. I didn’t want to organize the system like that, partly because my graduate students don’t organize into nice, neat hierarchies. I’ve got one who wants to work on collision avoidance with stereo vision. One of them wants to work on collision avoidance with a laser scanner. One who wants to work on reactive collision avoidance using sonar. These are going to be operating at different distances at different time scales. They don’t fit into a nice hierarchy. I don’t want to have an artificially imposed 10-meter barrier and 100-meter barrier when I’ve got a stereo vision system that works up to 25 meters. Let’s let it flow naturally at the limitations of the sensors. So I wanted to organize my architecture around what the sensors could do and what the graduate students could do and let different things flow in at different times.
The other extreme was the Minsky extreme. Marvin Minsky wrote this “Society of Mind” book where you could read the book in any order, because every two pages was a separate chapter, and that was Minsky’s theory as to how minds should work: you have a whole bunch of experts and they sort of collaborate together to give the illusion of purposeful motion. I didn’t want the illusion of purposeful motion. I wanted my robot to go down the road and not run into things. So I wanted a little bit more structure than the MIT AI lab, but not the big hierarchical structure of Dickmanns or of Albus. So they had different philosophies and different organizations. Other players in this-- have you talked with Red Whittaker?

Q: We’ve been emailing him. Didn’t get any response, but I was emailing him last year.

Chuck Thorpe: Red was a professor here in civil engineering who had played around with robots for a variety of civil engineering applications, and then Three Mile Island happened, and when the TMI people came to us and said, “Could you build a robot to go into the basement of TMI?” Red stepped up to the challenge and built a very good, very simple but very reliable robot just to go in and take pictures of TMI. They liked that so well they said, “Okay, could you go in and bring back wall samples?” So he built a slightly more sophisticated robot to go in and drill core samples and bring back pieces of concrete from the basement of TMI, so they could tell how far in the contaminated water had soaked, so they could tell whether it was possible to decontaminate it by just abrading the top layer of it. That worked so well they asked him to build the workhorse. So he and his graduate student, John Bares, built this monster stainless steel vehicle that could reach up 25 feet and rip things off of the ceiling, and had high-pressure hoses, hydro jets that could cut through concrete, and replaceable grippers to grab different kinds of things, and all tether controlled, all beautiful polished stainless steel so it could be decontaminated. If you haven’t met Red, I have this theory that just like architectures sometimes look like the person, robots sometimes look like the person who built them. Red is a mountain climber and a Golden Gloves boxer and a Marine. When he was in the Marines he played football for the Marines. I asked him what position he played. He said end. I said offensive or defensive? He said both. Red is 6 foot 5, 250 pounds, and when he talks about his robots he climbs up on the table and demonstrates how his robots are going to rip the ceiling down, and he builds robots that can climb up on the table and rip the ceiling down and so forth. It’s great working with Red. Red and his team were the mechanical geniuses behind Navlab 1. 
The first real outdoor mobile robot that we built. He’s gone on since then to build, oh, I think probably over 100 robots, and they’ve been in volcanoes, in Antarctica and in Alaska, in coal mines mapping abandoned coal mines, all over the planet, in the Atacama Desert of Chile, on Devon Island north of the Arctic Circle. Of course in the DARPA Grand Challenges, and the next one’s going to the moon. If you can’t talk to Red, you should talk to some of his students and colleagues and get Red stories. Get Red robot stories and get Red personal stories. He did box with an orangutan once.

Q: Did somebody see? Who saw this?

Chuck Thorpe: Red will tell you the story.

Q: Okay.

Chuck Thorpe: All right.

Q: Need to change tapes.

Chuck Thorpe: Okay, where were we? I keep rambling on about other people and strange stories.

Q: No, this is great. With Navlab 1, we discussed the architecture, right?

Chuck Thorpe: So, Navlab 1-- let me get this straight-- it really came to life in about ’86, and my son was born in ’86, and we had contests: when Navlab 1 was moving at a crawling speed, my son was moving at a crawling speed. When Navlab 1 picked up speed, my son was walking and learning to run. Navlab 1 got going a little faster, my son got a tricycle. I thought it was going to be a 16-year contest to see who was going to drive the Pennsylvania Turnpike first, because they were each progressing at about the same speed. Unfortunately, this really smart graduate student named Dean Pomerleau came along and built a neural network technique, which managed to be the most efficient use of the computing resources that we had, and the most innovative use of neural nets at the time, so in 1990, he was ready to go out and drive the Pennsylvania Turnpike. ALVINN was a simulated neural net that you would drive for a few minutes, and it would learn: if the road looks like this, you turn the steering wheel like this; if it looks like that, you turn the steering wheel like that. So if you trained it on the road that you were driving on, it was very good at spitting out steering-wheel angles, and he was able to outdrive the top speed of Navlab 1, which was about 20 miles an hour. We built Navlab 2, which was an Army Humvee, and Dean took it early Sunday morning and drove it up I-79 to Erie, Pennsylvania, and was able to drive, then, much faster. This was kind of revolutionary. People thought if you wanted to drive that fast, you had to have Kalman filters and clothoid models of the road and detailed models of the dynamic response of your vehicle, and all Dean had was a simple neural net that had learned, “The road looks like this, you steer like that,” and it worked pretty well. 
It had a little bit of hunt to it going down the road, so to fix that he had to have a little bit of a model that he built into it, and the model basically said, “Okay, I know that if I’m steering like this, what I’m actually doing is heading toward a point up there on the road, and so the way to take out the lag is to record where these points are that I’m headed toward, and then build a little path-follower that just tracks these points going down the road.” Dean went on to build smarter and smarter systems, and at some point said, “You know, I’m tired of having to retrain it. Let’s see if I can build a really smart system that figures out what the road looks like without me having to train it.” And so he built a smarter system called RALPH-- the Rapidly Adapting Lateral Position Handler. The acronym came first. He liked “RALPH,” and he had to figure out something it would stand for. And RALPH would look at the cross-section of the road and would say, “Oh, you know, there’s a dashed line here, there’s a solid line here, there’s a crack at the edge of the pavement here, there’s an expansion joint here, there’s a smear of oil down the center of the lane here, there’s a guardrail over here. All of those appear to be long, linear features that are pretty reliable. Let’s just track all of them.” He put RALPH on a series of vehicles. It worked pretty well. He wanted something that was easier to drive than Navlab 2, which was this big, boxy Hummer, so he took his own Honda Accord, he put a surplus robot motor driving a belt to turn the steering wheel, he suction-cupped a camera to the rearview mirror, he put a laptop computer in there and plugged it into the cigarette lighter, and he had a robot car. So Navlab 3 was Dean’s own Honda Accord, and that worked fine. 
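The lag-fixing trick described here-- record the aim points the net is steering toward, then have a little path-follower track them-- is essentially what is now called pure pursuit. A minimal sketch, assuming a simple bicycle model; the wheelbase, lookahead distance, and function names are illustrative, not taken from the actual ALVINN code:

```python
import math

WHEELBASE = 2.7      # meters; illustrative value, not from the transcript
LOOKAHEAD = 20.0     # how far down the road the aim points sit, in meters

def aim_point_from_steering(steer_rad, lookahead=LOOKAHEAD):
    """The point the vehicle is heading toward under a steering command,
    approximating its path as an arc of radius L / tan(delta)."""
    if abs(steer_rad) < 1e-9:
        return (lookahead, 0.0)              # straight ahead
    radius = WHEELBASE / math.tan(steer_rad)
    theta = lookahead / radius               # arc angle swept to the aim point
    return (radius * math.sin(theta), radius * (1.0 - math.cos(theta)))

def pure_pursuit_steer(aim_x, aim_y):
    """Steering angle that drives the vehicle through (aim_x, aim_y)."""
    curvature = 2.0 * aim_y / (aim_x ** 2 + aim_y ** 2)
    return math.atan(WHEELBASE * curvature)

print(pure_pursuit_steer(20.0, 0.0))         # -> 0.0 (aim point dead ahead)
```

The round trip is exact: a steering command maps to an aim point on the vehicle’s arc, and pure pursuit over that point recovers the same command, so the follower can smooth the net’s raw outputs without biasing them.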
Navlab 5, Delco loaned us a Pontiac minivan, and we put an extra truck battery in the backseat to power the computer, and hooked ourselves up to the cruise control, and, again, laptop computer and camera, and Dean and our student, Todd, took it from Washington, D.C., to San Diego, California. Why D.C. to San Diego? Well, they needed to get it to Denver to do a demo, and they said, “As long as we have to get it to Denver, let’s back up and start in D.C. And we’re going to have to do a demo two years later in San Diego. Let’s just keep on going on out to San Diego so we can collect some video as to what the roads look like on the way out there. And as long as we’re going out there, let’s swing through Las Vegas and get an Elvis impersonator, and let’s go through Hollywood and see if we can get Jay Leno excited, because he’s a car buff.” And so they did this series of stops along the way, and they were able to go-- this was in ’95-- from D.C. to San Diego, 98-and-a-half percent of the way autonomously. This was their “No Hands Across America” run. The other one-and-a-half percent was things like new asphalt in Kansas, at night, with no stripes painted on it. Well, that’s a challenge for a human. Given the cameras of the day, it was an even bigger challenge for the autonomous system, so they had to drive by hand. Or going through rush-hour traffic around Denver in a construction zone, they had to take over and drive by hand. But it really worked well, and it had some real scientific payoff. Driving in this part of the world, you’re used to, “Well, there’s a painted line, and if it’s yellow, that’s the left edge, and if it’s white, that’s the right edge, and if it’s a dashed line, that’s the center line.” And a computer vision system is pretty good at following a dashed line or a solid line, et cetera. You get out to Southern California, where they don’t have snow, and they don’t have painted lines; they have Botts’ dots. 
If you haven’t been to Southern California-- they have these raised pavement reflectors. You can’t put them on Pennsylvania roads, because the first snowfall, the plow would just scrape them all right off the road. Botts’ dots are great for humans, but if your vision system is expecting painted lines, and you only see a dot every once in a while, you don’t know what to do with it, so you have to tweak your vision system to deal with Botts’ dots. In Kansas, some of the gravel that they use to do their asphalt is red, so if you have a color-based system, instead of yellow-on-white or yellow-on-black, now you have yellow-on-red. So you’ve got to learn how to deal with all of these kinds of variations in order to build a reliable vision system. The run in San Diego was important because, at the time, we were part of a DARPA contract and part of a DOT contract-- two DOT contracts. One of them was a driver warning system-- run-off-road collision countermeasures, and we’ll talk about that in a minute. The big one was the Automated Highway System. Automated Highway System, the goal was to build fully automated vehicles that could drive down the road, and we knew that the demo for that was going to be in San Diego. There’s a stretch of I-15 that goes through Miramar Air Base that has HOV lanes. In the morning, they’re inbound; in the afternoon, they’re outbound. During lunch and at night, they’re not used, so they said, “That’s where the demo will be.” And we went out and collected some data and discovered there were Botts’ dots, figured that our vision system could run pretty well on that, and worked on building better and better systems for that. Automated Highways was an enormous project. It was General Motors, it was Hughes, it was Delco, it was Bechtel and Parsons Brinckerhoff, and it was University of California, Berkeley, with their PATH program, and us. 
The PATH program-- those guys are, by nature, controls people, and they live in the crowded Bay Area, where you’ve got five lanes of traffic going each way, so their notion is, “Let’s take over one lane and have it be a dedicated lane only for computer-controlled vehicles, and let’s put magnets in the road, and let’s have the computer-controlled vehicles all talking to each other, and all talking to a centralized control system.” So it’s... again, it’s this nest of control loops: a control loop to control your own speed, and a control loop to control the headway to the vehicle in front of you, and a control loop to control a platoon of vehicles, and a control loop to tell this platoon what it should be doing, et cetera, et cetera, et cetera, and a control loop watching the buried magnets and controlling your lateral position-- a great idea if you can take over a lane for only automated vehicles, and then probably take over another lane so that you can have manually driven vehicles get up to speed and then merge, et cetera. Not a very good idea for Western Pennsylvania, where, if we’ve got two lanes going each way, you can’t afford to take over one lane. We said, “Let’s take the opposite approach. Let’s build our vehicles which are so smart that they can be automated even running out in mixed traffic-- mixed in with other vehicles.” Of course, being robotics people and being perception people, this is a lot more fun than drilling holes and burying magnets down the road, so that was the approach that we built. For that, we built Navlab 6, 7, 8, 9, and 10. In fact, that’s what these guys are. Six and seven were Pontiac Bonnevilles. Those were sweet vehicles because they had electronic cruise control, so it was very easy to tap into it, and General Motors gave us prototype electronically controlled brakes and electronic power steering to tap into. Navlab 8 was a minivan. 
We got tired of waiting for them to retrofit these vehicles, and so we just went down to the local Chevy dealer and bought a minivan. And 9 and 10, Houston METRO gave us full-sized passenger buses that we retrofitted and turned into robot buses. So our demo, we showed these five vehicles. We showed that we could use them in driver assist mode and in driver warning mode. Then we showed that you could hit a button and they became automated, and you’d take your hands off the wheel. And they could talk to each other if there were other automated vehicles, so if one automated vehicle saw an obstacle, it would broadcast the location of that, and the other vehicles would be smart enough to change lanes. But our vehicles were smart enough to run mixed in with other vehicles. For instance, if a vehicle was using its radar to follow another vehicle and it was going too slow, it would say, “You know, I’d like to change lanes”; use the vision system to say, “Is there another lane next to me?”; use the side-looking and rear-looking sensors to say, “Is that clear?”; because it was running with other vehicles, put on the turn signal-- these are polite vehicles; change lanes; and then pick up speed and pass the slow-moving vehicle. So we’re working on all of this kind of stuff back in 1997, and it was kind of fun. More than kind of fun; it was great fun. We did the demo. Berkeley did their demo, we did our demo. The U.S. Secretary of Transportation came out. He chose to ride on one of our vehicles, partly because we had a cool demo, partly because we had buses and he wanted to demonstrate that this was friendly for transit. At the end of the demo, TV cameras running, cars in the back, Secretary Slater, standing there, gave this wonderful speech. It went, in part, like this: “Ladies and gentlemen, five years ago, Congress set us a goal to build the Automated Highway System. Today you see before you the first impressive steps towards that goal. 
Now the president has given us a new goal-- to balance the federal budget.” And we said, “Oh, no!” And, in fact, that’s what happened to Automated Highways. He said, “That was a great demo, but we think that it’s too long-term, and so we’re going to redirect DOT to work on shorter-term funding.” Well, the shorter-term funding is collision avoidance, and that was the other project that we were working on. DOT had people working on forward-looking collision avoidance, and lane-change collision avoidance, and intersection collision avoidance. Our task was single-vehicle lane departure collisions. If you think about people running off the roads, there aren’t that many collisions of that sort, but they tend to be deadly. A third of all fatalities are single-vehicle roadway departures. You fall asleep, you drift off the road, you wrap yourself around a tree. The lane-change merge ones, those tend to be fender-benders. The rear-end collisions, they’ve designed crumple zones so that you bend a lot of sheet metal and you blow up your airbag and you walk away from it. If you wrap yourself around a telephone pole, that’s pretty hard to survive. So we did a lot of work on the human factors. We did a lot of work on the accident causes. If it turns out that you’re drifting out of the road because you hit a patch of ice, well, warning the driver that you’ve drifted off the road because you’re sliding on ice isn’t going to do any good. But it turns out that a lot of it is driver inattention, drivers falling asleep, et cetera. So we built a very nice system that could watch the road, and if you started to drift out of the road, warn you, and predict that you were going to drift out of the road before you even drifted out, and, even better than that, could watch while you were driving, and if you started to follow the telltale patterns of a drowsy driver, tell you that it was time to stop and get a cup of coffee. 
Dean spun off a company to build those kinds of warning systems, and commercialized that. His company was sold first to a Boston-based computer vision company, and now to a Japanese electronics company, and it’s very rewarding to see this kind of stuff being out there in the market and starting to save people’s lives. It also has a very funny personal tie. That little kid who was racing Navlab 1 just finished his master’s degree in robotics at Carnegie Mellon University, and now works for that company. His boss there is one of my graduate students, the guy who was in charge of building the architecture to merge this whole thing together. So when Jay, the senior guy, was interviewing Leland, my son, what kind of questions should he ask? “Leland, how long have you been interested in obstacle avoidance and safety in robot vehicles?” “Well, Jay, when I was three years old and you had me drive my bicycle on training wheels out in front of your vehicle to see if your software could stop it-- ever since then I’ve been a real fan of improving the reliability and safety of automated vehicles.” It was a no-brainer interview, and it was very fun for him to get to work on that. So this is now a family mission to try to build intelligent robotics systems that can save people’s lives.

Q: You mentioned that you had the Autonomous Highway System--

Chuck Thorpe: Automated Highway-- NAHSC? National Automated Highway System Consortium.

Q: Yeah.

Chuck Thorpe: AHS.

Q: You mentioned that you were working on a helper, like an assist.

Chuck Thorpe: So the assist was the run-off road collision countermeasures part of it.

Q: And then there was the warning.

Chuck Thorpe: And that’s the warning part.

Q: Both of those were worked into the collision?

Chuck Thorpe: Yes.

Q: Okay.

Chuck Thorpe: The warning part is interesting. How do you figure out what kind of warning to give a drowsy driver? There are a few places in this world-- we used the one in Iowa-- with a driving simulator. It’s like a flight simulator, where you have an aircraft cockpit in this thing that moves around, with a big graphics screen, only here you have a car-- in this case, a Ford Taurus-- in this moving, shaking simulator with a big screen. So we would have people driving the car, and we would distract them-- ask a question like, “Look for the Frank Sinatra 8-track tape,” and there isn’t a Frank Sinatra 8-track tape, so they’re looking down here-- input a simulated wind gust so they’re drifting off the road, and then the system would trigger an alert, and we would see if the person would react correctly to that. So, do you beep? Do you have a directional beep? Do you shake the steering wheel? Do you nudge the steering wheel? Do you play a tape of your mother’s voice saying, “I’ve told you it’s time to wake up”? I-- we didn’t know what was going to work. I thought sounding like an alarm clock would be bad, because someone who’s drowsy would be reaching for the snooze button. It turns out that a beep is pretty good, and that a directional beep is not that useful. We found that nudging the steering wheel helped, because the drivers learned, “Oh, that means we should steer this direction.” Our colleagues at Daimler-Benz tried the same thing in Germany and found the opposite: that if they nudged the steering wheel this way, their drivers reacted by counter-steering, which, of course, would pull you further into the ditch and make you roll over and not be a good thing. So what we’ve gone to, mostly, is just a beep. And you want to set the system so that it’s sensitive enough that, in normal driving, it sometimes beeps. 
So you say, “Oh, yeah, sure enough, that meant I was getting close to the edge of the road.” And so you program yourself to say, “Yep, that beep means I’m getting too close to the road”; not so often that it’s a nuisance, but often enough that you’ve learned and you’ve remembered. You don’t want the first time that beep goes off to be the time that you have to react instantly; you want that to be a response that you’ve gotten used to. And we’ve eliminated the beep for some things. If the vehicle, all of a sudden, violently swerves, well, that may be because a moose just jumped in front of the road, and you’re deliberately driving off the road, and the last thing you need is a beep going off during your moose-avoidance maneuver, so we suppress the beep for that; suppress it if you turn on the turn signal, because then you meant to pull off the edge, et cetera.
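The alert logic described above-- warn before the lane edge is actually crossed, but stay quiet during deliberate maneuvers-- can be sketched in a few lines. The thresholds and function names here are mine, purely for illustration, not from the actual system:

```python
SWERVE_THRESHOLD = 0.5   # rad/s of steering motion treated as deliberate
WARN_HORIZON_S = 1.0     # warn if the edge would be crossed this soon

def should_beep(dist_to_edge_m, drift_rate_mps, turn_signal_on, steering_rate):
    """Predict a lane departure from the lateral drift, with suppressions."""
    if turn_signal_on:                         # driver meant to leave the lane
        return False
    if abs(steering_rate) > SWERVE_THRESHOLD:  # e.g. a moose-avoidance swerve
        return False
    if drift_rate_mps <= 0:                    # drifting back toward center
        return False
    time_to_crossing = dist_to_edge_m / drift_rate_mps
    return time_to_crossing < WARN_HORIZON_S

print(should_beep(0.3, 0.5, False, 0.0))   # -> True: 0.6 s from the edge
print(should_beep(0.3, 0.5, True, 0.0))    # -> False: turn signal suppresses it
```

Tuning the warning horizon is exactly the “sometimes beeps in normal driving” trade-off: too tight, and the first beep a driver ever hears is an emergency; too loose, and it becomes a nuisance.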

Q: In general, in these projects, did you have other collaborators that worked with you?

Chuck Thorpe: Other faculty?

Q: Mm-hmm.

Chuck Thorpe: Well, Dean Pomerleau, who had been a graduate student, became faculty and worked on that. Todd Jochem, who was with Dean on the “No Hands Across America” ride, became a postdoc and worked on that with us. More on the off-road work, Martial Hebert has been our 3-D vision guy forever and ever. Tony Stentz has been our off-road path-planning person forever and ever, and Tony was a graduate student in the Navlab group who joined the faculty and came onboard with us. So, yeah, there’s a big group of people that we work with all the time.

Q: I’m curious about this demo in San Diego. You mentioned Berkeley. Who was leading some of the other demo research groups?

Chuck Thorpe: Well, what the Berkeley guys showed was eight cars running together, four-meter gap between the cars, and then showing how the cars could split up. So if this guy in the middle wants to exit at the next exit, you split, and then you let this guy change lanes, and then you merge back together again, and so forth.

Q: And who was doing that at Berkeley?

Chuck Thorpe: Steve Shladover, Jim Misener, Wei-Bin Zhang. They’re the people who really ran that research project, and they’re still there, and they’re still active. They’re not actually on campus. They’re at the Richmond Field Station, which gives them enough room to have a little track and bury some magnets in the ground, and so forth.

Q: What were the other big demos?

Chuck Thorpe: So those were the core demos that were part of the consortium. They also invited-- Toyota did a demo, Honda did a demo, Ohio State did a demo, Lockheed Martin did a demo... showing different kinds of technology. Ohio State was following a strip of metal put down with notches cut in it, designed to reflect radar. So they thought, “This is cool. We’ll have one radar that will look for cars, and we’ll also, by the way, pick up this strip of tape.”

Q: Who led that research group?

Chuck Thorpe: Umit Ozguner, which is spelled just like it sounds if you know Turkish.

Q: I do.

Chuck Thorpe: O-Z-G-U-N-E-R. And Umit is still with DOT’s [ph?]-- they’re scattered around, randomly.


Chuck Thorpe: Yeah.

Q: Pretty much.

Chuck Thorpe: And Umit is still there, alive and kicking and active. He and I were at a conference in Turkey in June. Let’s see. So those were the groups who did all of those demos. Each of the Japanese car companies has been working on various sorts of automated systems for a long time.

Q: Who leads those groups at Toyota, Honda, Mercedes?

Chuck Thorpe: I don’t even know anymore. It was part of a big Japanese government push. They had their equivalent of the Automated Highway group, and that also has disbanded, and I’ve lost the links with most of the people over there.

Q: I know Mercedes’ driver assist...

Chuck Thorpe: Mercedes has done some very good work. The other group which has gotten a lot of press-- well, two other groups that have gotten a lot of press over the last month-- Alberto Broggi, from the University of Parma... he just finished doing a 10,000-kilometer run from Rome to Shanghai. Now, that’s a cool run, and it encounters an awful lot of varied territory. We won’t know until he starts to publish the technical papers what the real technical meat is. His first vehicle, the lead vehicle, was going to be automated where the roads were good enough for the vision system to work, but it was also going to be laying down a trail of GPS waypoints, and the following vehicle, which was following the lead vehicle either visually or by following GPS waypoints, was going to attempt to be automated as much as possible. So we’ve seen the press releases, but we haven’t seen the scientific papers to know how much “as much as possible” turned out to be. The other thing that’s cool is that he was collecting all of the data, so he’s got terabytes and terabytes of data, and now he’s got to figure out how to index that and make selected samples of it available for the rest of the world. The other group that hit the press recently is the Google group. I don’t know if you followed what they were up to.

Q: Sebastian Thrun, right?

Chuck Thorpe: Sebastian. Yeah. And Chris Urmson called me the day before that broke, and said, “Look, we need to give the press the name of somebody outside to go ask questions of, in case they want an outside expert. We’d like you to be that outside expert. We’d like to explain to you what we did so that you can say something, like, ‘Well, those are nice guys.’” And so he told me a little bit about what they did, and they’ve done a remarkable job of building very detailed maps and models. Now, it’s the Google group, and we’re very grateful to Google, but we think of it as the Carnegie Mellon West group. Sebastian, of course, used to be here, and then went to Stanford and Google. Chris is a Carnegie Mellon person on leave. James Kuffner is a Carnegie Mellon faculty member on leave. If you go down through the list, everybody except for one in that group either is a Carnegie Mellon person or has a Carnegie Mellon heritage, so those are our boys. And we’re delighted that they’re off with Google, and Google’s given them the resources to do really cool stuff.

Q: You said that the Automated Highway ended up seeming kind of too far away, but there’s a lot of work that’s going into automated vehicles, whether they’re for civilian driving or for military purposes.

Chuck Thorpe: Exactly. So there’s a lot of-- the military robot work continues. The civilian work, if you go back to the 1939 World’s Fair, GM had their booth of the City of the Future, and it was automated vehicles running around, and everybody was all excited. Then it kind of died down. In 1956, GM had the Firebird II, and this was... wow, is it a period piece! It has this glass bubble and aircraft-style controls, and the big tail fins on the back. And the vehicle was actually following a buried cable in the pavement, using these tube electronics with this big heat sink hanging out the side, and in the videos of the artist’s concept you have this family going down the road, the father wearing his coat and tie, and the two kids in the back, and the mother wearing her dress, and they’re talking to the control tower: “Control tower, this is Firebird II”-- and he says, “You have permission to enter the automated lane.” Part of the system really worked, and there was a big hoopla, and then the interest died down. It comes back around again another 30 years later, and there’s this big push, and then the interest dies down. Part of it is that you get these visionary pitches, and then they say, “You know, but there are budget issues, et cetera.” Part of it is waiting for the technology to get a little bit better each cycle. Part of it is the limited attention span of the funders. If you had automated vehicles, you could do all kinds of cool things. You could eliminate accidents. We all know that 90 percent of accidents are caused, at least in part, by humans, and you could make the system much safer. You could make it much more environmentally friendly. If you look at the number of hours that you spend idling in traffic, automated vehicles make for much smoother traffic flow, much denser traffic. If you sit there and you count the number of vehicles per lane per hour, the best you can do with human driving is about 2,000 vehicles. 
Two thousand vehicles at 100 kilometers an hour-- that’s 50 meters between vehicles. It’d be very easy to have automated vehicles at half that spacing and still be very safe, and eliminate congestion, and not have to pave over more lanes of highways. You could start to do lots of other smart things. You could have cars-- Zipcars or station cars. You wouldn’t have to own your own car, but you could say, “Oh, I need a car tomorrow for this particular trip. Could I have an automated car delivered to my door at 8:00 A.M.? And I’ll drive it as long as I need it, and then it can automatically go to the next person who’s reserved it,” et cetera. So there’s lots of ways that you could use automated vehicles to reduce the number of cars that we have. So there’s all of those kinds of things that are out there. What we’ve seen instead is the more incremental approach: “Let’s take the vehicles we have, and let’s make them smarter and smarter. Let’s give them more sensors, let’s give them smarter control. So-- let’s put adaptive cruise control on them.” That works to a certain point. At some point, if you start to say, “Okay, I’m going to use adaptive cruise control to keep the gap to the car in front of me, I’m going to use smart lane keeping to keep myself centered in the lane,” all of a sudden, people are going to say, “Oh, gosh, this car can drive itself,” and they’re going to take their hands off the steering wheel, and then they’re not going to pay attention, and then the car had better be 100 percent reliable. And you’re not going to get to 100 percent reliable unless you have a focused effort working on the fully automated system, and that’s what we really have to have. Now, for military vehicles, that continues to be a very important problem for all kinds of different reasons. 
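The capacity arithmetic in that back-of-the-envelope argument checks out; a couple of throwaway functions (the names are mine, just for illustration) make the relationship explicit:

```python
def spacing_m(vehicles_per_hour, speed_kmh):
    """Average road length per vehicle at a given flow rate and speed."""
    return speed_kmh * 1000.0 / vehicles_per_hour

def capacity_vph(spacing_m_per_vehicle, speed_kmh):
    """Vehicles per lane per hour at a given spacing and speed."""
    return speed_kmh * 1000.0 / spacing_m_per_vehicle

print(spacing_m(2000, 100))   # -> 50.0 m between vehicles, as quoted
print(capacity_vph(25, 100))  # -> 4000.0 vehicles/lane/hour at half the spacing
```

At the same speed, halving the spacing doubles the lane’s throughput, which is the congestion argument being made here.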
We recently had the 25th anniversary of the first demo of the Autonomous Land Vehicle, and had a party to celebrate it, and had people from Martin Marietta, who were the integrating contractor who built the vehicle, and the different groups of us who contributed to it. One of the people who spoke there was General Rick Lynch, now a three-star general. Twenty-five years ago, Captain Lynch was this young combat engineer who was assigned to come tell us what the requirements were like for a real vehicle operating in real wartime scenarios. And he said, “Sir, if you want to know what it’s like to operate an M1A1 in combat, sir... you ought to read Red Storm Rising! Why, there’s this place in there where they’re bombing down through Germany, and they just rip the governors right off of the tanks, and they’re flying down these dirt roads, and they’re doing 50 miles an hour-- and, sir, that’s what it’s like to operate an M1A1 tank in combat, sir.” Twenty-five years later, we find General Lynch a three-star, just back from his latest tour of Iraq. He said that in his last tour, he lost 156 men in Iraq, 130 of whom were doing jobs that should’ve been done by a robot. He’s very serious about having robots do... not even the combat missions. It’s things like robots carrying logistics; robots sitting up on a vantage point, looking for snipers. We know that the most dangerous job in the military is the scout. If you send a scout over the hill, the enemy knows that those are the people calling in the gunfire, and the scout is, by definition, going into an unknown place. If we had robots doing the scouting, if you send a robot out there and it tells you what it sees, great; if you send a robot out there and it doesn’t come back, well, that’s already told you something about what’s happening on the other side of the hill. So there’s a lot of those kinds of things that the military’s interested in-- some of which look like combat, and some of which don’t. 
There are, of course, big ethical questions about robots in combat, particularly when you start putting firearms on them, and that’s a whole, very legitimate set of things to think about. But there are a lot of unarmed roles for robots that are very interesting.

Q: In terms of the autonomous vehicles on the highways for civilian use, what are some of the hurdles that you see, in terms of law or social acceptance, and that kind of trajectory that you see it taking?

Chuck Thorpe: Let me tell you, first of all, what my group is up to now, and then I’ll come back to that. What we’ve been doing for a lot of years now is working on urban driving. So this is back to the warning system, but how could we do a warning, say, for transit buses? Transit buses are very, very safe, but they would like to be even safer, and they operate in crowded urban environments, and if you think about it, the rearview mirror on a bus has to be way up in the air so it doesn’t clunk pedestrians in the back of the head, but that means there’s a big blind spot underneath that mirror. So we’ve been putting sensors on buses all the way around, to tell bus drivers where it’s safe to go and where it’s not safe to go. And then you have to be smart about things like turning off the collision avoidance when you’re sitting there with your door open, because it looks like, “Oh, no! That pedestrian is about ready to run into me!” Oh, he’s not about to run into you; he’s about to step onto the bus. So, those kinds of things. And it’s been very interesting to work on that, because the crowded urban environments have such challenges, but, of course, that’s where a lot of the accidents occur, so we have to be very good at operating in those kinds of environments. So that’s a whole set of very interesting problems that don’t require full automation, but the smarter your vehicle can be, the better off it can be in generating those kinds of warnings and advice. Driving in full automation has a number of social issues, some of which make sense and some of which make less sense. There are people who really like driving, who would love to have automated vehicles so that all of the other vehicles are automated so they can zip in and out among them in their manually driven vehicle. Okay, that’s legitimate-- but then you have to have people driving mixed in with automated vehicles driving. People worry that automated vehicles might be expensive, and so only rich people could afford them. 
Okay, so that’s a legitimate argument that says, “Maybe we ought to put more of the intelligence in the infrastructure so it’s supported by taxes, so it’s available to everybody, and less of the expense into the individual vehicles so that they do become more affordable for everybody.” If you have to build dedicated lanes for automated vehicles, then there’s a question of who supports that. If taxpayers are paying for lanes that only people who can afford new cars can drive, that doesn’t look very good. Maybe you build those on toll roads, and people who are willing to pay for the convenience of an automated vehicle are willing to pay for the additional costs of the toll road. If you had an automated vehicle that could drive itself to work, would you live even further away from work than you do now, and if so, would you just cause more congestion, and if so, what are the social effects that would counteract that? Could you counteract that by making buses more efficient, and have personal mass transit? Could you counter that just by appropriate taxing and pricing strategy? So, when we studied Automated Highway, there was a technology study stream, there was a whole ‘nother stream studying some of those social and regulatory issues, and there was another very interesting stream studying precursors: What other high-tech things have happened in the past that we could learn from? For instance, if you go back in history, the elevator was around for 500 years, but it was deemed too dangerous to use for people, so they used elevators to transport food up to the master’s dining chamber without grungy servants having to walk up there, or to haul freight; they didn’t trust them for people because they were worried about the rope breaking. Mr. Otis, after he invented the safety brake, built this 40-foot shaft in Central Park, had himself hauled up in the air, and the cable sawed through, to demonstrate that his safety brake would hold. 
It was that invention that made the elevator safe, and it was the safe elevator that really changed cityscapes the way that we know them. Used to be the tallest building you could build was seven floors, and the poor people lived up on the seventh floor because they were the only ones willing to walk that far up. Now, with the Otis elevator and with Louis Sullivan, you could start to build monster skyscrapers going up 20 floors, and put the penthouse on the top floor, because people started to understand that elevators were a safe mode of transportation. Even when I was a kid, I remember manually operated elevators, or if you went to a classy downtown store with push-button elevators, there was still an elevator girl-- sorry, it was sexist-- wearing white gloves, to push the button for you and to call out the floors. What was the change that took it to people being perfectly happy with automated elevators, and perfectly comfortable to push the button themselves? There’s an example of automated transportation which is so safe, people don’t even think about it anymore. How do we get automated cars to be just as safe so people have that same level of social acceptance? Here’s another example: It used to be if you wanted cash, you walked into the bank and you talked to a real live living person, and you gave them your passbook, and they stamped in the passbook what your current balance was, so you had proof right there on paper. People don’t do that anymore. I can’t tell you how long it’s been since I’ve been into a bank. We trust these machines to-- you stick your card in and it spits out money, and your balance is correctly stored in some giant electronic brain. What did it take to make people comfortable with a machine handling their money, and what would it take to make people comfortable with a machine handling their transportation? Here’s an example that doesn’t have as happy an ending: the supersonic transport. There was this big hullabaloo in the 1960s. 
We weren’t going to be-- we were going to have supersonic transports whisking us away to Europe in two hours. The French Concorde worked. The guys at Boeing who were designing the SST used to walk across the hall and laugh at the guys designing this 747 that nobody was ever going to fly. The 747 people got scared. They decided that the only use for the 747 was going to be cargo, so they designed their plane to have this big tube that you could fill with cargo, and that meant they had to move the cockpit up on top, which is why the 747 has the shape that it has. Well, of course, the SST never got built, for a variety of economic and environmental and social reasons. The French built their Concorde; we never built our SST. What should we learn from the things that kept the SST from being built, and apply to the Automated Highway System?

Q: What about in terms of either an automated system overriding the driver, or the driver’s ability to override an automated system, and what kind of issues do you see there?

Chuck Thorpe: Well, there are very big issues about who you trust more-- the car or the driver-- and how you make that handover, particularly if you don’t know what the state of the driver is. The weirdest concern I got was, “Suppose that a driver drove his car onto the Automated Highway ramp and pushed the ‘Go’ button, and the automated system took over, and then he had a party with his buddies and got completely smashed, and now he wanted to take control back, but he was legally drunk. Is it the responsibility of the automated system to detect that he’s drunk and not let him resume control?” I don’t know. That’s-- this is pushing the boundaries of social responsibility, of technology, of, “Do you want Big Brother assessing whether you’re competent to resume control of your vehicle?” Those are tough kinds of questions. In the U.S., it’s particularly difficult to make some of these decisions. The United Kingdom has one police force; the U.S. has 18,000 police forces. We have 50 state Departments of Transportation, we have 390 metropolitan planning organizations, each of which has its own standards. This is another reason why Dickmanns could have his system, knowing what the Autobahn is going to do. We’ve got different standards all over the place for things like lane widths and markings and signage, et cetera, so we don’t have those same kinds of standards. The same thing will be true when we introduce automated vehicles. We have to think about how all of those things play out. Here’s another one: Suppose that we made a system 10 times safer than it is today, and saved 90 percent of the lives that are currently lost due to driver error. The car companies will tell us that the 90 percent of people whose lives are saved are not going to come up and offer General Motors a million bucks for having saved their lives, but if there’s one car that, through a bug, accidentally causes a fatality, it is very likely that GM will be sued for the cost of that fatality. 
How do you balance all those off? The real conclusion that people came to is, if you look through history, as a society, if there’s that big of a benefit, we have always eventually figured out how to take advantage of that benefit. It hasn’t always been clear, it hasn’t always been fast, but if we could save 90 percent of the lives that we’re losing right now, as a society, we’ll figure out the way to do it.

Q: What about in terms of the recording of the driver’s activities and things like that? I know there’s a lot of concerns about privacy and surveillance, and that that system, if you made a mistake, would say, “Ah, you were responsible for this accident,” and people wouldn’t accept that.

Chuck Thorpe: The-- well, there’s a lot of that going on in cars today, that they’re recording how fast you were driving at the time of a collision, and so forth. So that’s an issue which is there and needs to be discussed, and proper uses of that technology. Again, there’s ways of making safeguards like that. This came up, oh, 50 years ago on the Pennsylvania Turnpike. Well, the Pennsylvania Turnpike, when you get on at an exit, they hand you a card; when you get off, you give back the card so they can figure out how much toll to charge you. They could very easily say, “Hmm. You’ve just come 200 miles in three hours. I don’t think you were driving at the legal speed limit.” It’s illegal for them to use the information for that purpose. All right, so there’s a case where there are legal safeguards against them using this information for a purpose other than collecting toll. It’s easy to enact the same kinds of safeguards. So, yeah, these are issues, but these are issues that you can deal with. I’m the wrong one to deal with them. I’m a technology guy. But these are the kinds of things that technology guys have to make sure that somebody is thinking of if this technology’s going to make a difference.

Q: And as a technology guy, what do you look back on as your biggest accomplishments or the most significant work you’ve done?

Chuck Thorpe: I’ve been happy about all kinds of different things. I’ve been happy about graduate students. I expect my graduate students to be smarter than me. I expect them to come in and say, “Look at this cool idea.” And I’ve had 17 Ph.D. students, and they’ve gone off to do all kinds of cool things that I couldn’t have imagined, some of which look like robotics, some of which look like all sorts of other different things, and it’s been fun to watch them, and watch their imagination, and watch them succeed. The most fun I’ve had is riding on automated vehicles. I forgot how scary it can be until I went and rode on somebody else’s automated vehicle, and I thought, “Wait a minute. I don’t know the guys who wrote the code for this thing. What if it crashes? What if it”-- when I’m riding on the vehicle built by my own graduate students, it’s like, “Cool! Look at this! We’re going down the road! Sure enough, it’s seeing those kinds of things! All of our design is working!” That’s an awful lot of fun. It’s that real gut feel that what you’re working on is doing good stuff. It’s not some esoteric theory that you have to wait until you go off to the right conference, where there’s the right people who understand that set of differential equations. I can make a video of my car and show my mother, and she can understand, “Yeah, automated vehicle going down the road.” That’s been the most fun. Seeing this technology get spun off into a company and go out there and start to save lives-- that’s another kind of big emotional high. So all those things have been fun.

Q: Are there some specific companies that came directly from your work?

Chuck Thorpe: Well, Dean Pomerleau’s company, which was AssistWare, which he sold to Cognex, which Cognex has sold to Takata. Takata’s a tier-one Japanese electronics company, and exactly what Takata is doing with it, I’m not sure that I know, and the closer-- the parts which are very close to commercialization, they can’t tell me about.

Q: And you seem to trust this enough to, as you said, put your three-year-old son in front of it and wait for the vehicle to stop, even though there was obviously a person in there.

Chuck Thorpe: I could show you a videotape, or you could find it on YouTube, of the Navlab driving and Leland riding his bike in front of it. And yeah, there was a person inside with their foot on the brake pedal. At the time, we were running on fairly standard computers, and I’m not going to trust my son until it’s triply redundant, et cetera. But it was good for the TV camera. Do I trust this stuff? I trust it more and more, and I trust people less and less. The more I work in collision avoidance, the more screwy things that I see people do out there, and the more conservative a driver I get because I see the kinds of funny things that people do. I also see the kinds of funny situations that occur out there. I collect stories of strange things that people have found on the road, because if an automated vehicle is going to have to avoid obstacles, I need to know what kinds of obstacles are out there. What’s the strangest thing that you guys have seen on a road?

Q: A boat.

Chuck Thorpe: A boat? Now, what’s a boat doing on a road?

Q: It was in Kansas City. It fell off of a trailer.

Chuck Thorpe: Now, what’s a boat doing in Kansas City? But there’s an example. If you wrote the list of the top hundred things that your automated car would have to avoid, “boat” would probably not be high on the list.

Q: I have another one which is something that happened to us, that we had rented a truck to go get some stuff from Ikea, and the lining, whoever had used it before apparently had not secured it, and on the highway, the lining blew out.

Chuck Thorpe: The lining of the truck?

Q: Yeah. So, in the bed of the truck, there was a huge plastic thing. So, yeah, that flew out. Thankfully, there was nobody behind us, but what you do in that situation, I don’t know. I saw a bunch of plastic chairs when I was driving out to New York in September.

Chuck Thorpe: Plastic chairs, if you ran into them, you’d-- probably be a lot of plastic flying all over the place and a very scared driver, but you probably would survive it. I was giving a talk in Rochester, New York, and I asked somebody the strangest thing they’d ever seen recently, and they said, “An alligator.” I said, “In Rochester, New York?” “No, no. This was down in Florida.” But you think about it. An alligator’s pretty dense and pretty low to the ground, and can move pretty fast, and it’s pretty hard to see. That’s not very good.

Q: There are deer all the time.

Chuck Thorpe: Deer all the time, yeah. Yeah. Not as often as people would have you believe. It turns out if you have a collision with a deer, it goes on your comprehensive insurance and not your collision, and so your insurance rates don’t go up. So a lot of people come into the body shop, “Ah, I’ve hit a deer.” And, well, that deer must’ve had red paint on it. But no, deer-- that’s a common one. Here’s an odd one: cats. You don’t think of a cat as a dangerous object, except people will say, “Aw, the poor cute cat,” and they swerve out of the way, and they run into somebody. One of the stranger things I heard about was a toilet. Again, it fell out of a car. But you think of a toilet as this ceramic object. Ceramics don’t reflect radar. That’s what they make stealth things out of. Your automated collision detection radar may not see it. It has this real shiny surface that the laser may glance off, so the laser may not see it. If you have sun glinting off of it, your stereo vision system may have a hard time seeing it. This may be the perfect stealth obstacle. There are catalogs of this stuff. One of the equipment companies wanted to build an automated trash-collecting system, so they commissioned this study to find out what kind of trash there is-- things like ladders. You run over a ladder, it spins, it twists, it rips out the underside of your car. So, all of those kinds of things that you would have to deal with-- those are the sorts of things that make this an interesting challenge.

Q: What kind of advice do you have for young people who might want to get into robotics?

Chuck Thorpe: Robots are cool. You certainly have to understand your math. You certainly have to understand your computer science. You certainly have to study little bits of engineering. But then you have to use your imagination. You have to be solid on the fundamentals, but there’s all kinds of applications for robots that I’ll never think of, so robots and something turn out to be powerful combinations: robots and art, robots and design, robots and medicine. In my case, it’s robots and driving. Robots and games. I was kind of surprised. A couple of my graduate students have gone off to the computer game industry, and they said, “Look, Chuck, it’s everything that you have on a real robot, with perception and intelligent planning and multi-agent coordination, except that we don’t have to get our hands dirty. It’s all simulated mud.” Robots and games, robots and toys-- those things are all fun. So use your imagination. Play. Study the basics-- get those down-- but keep your imagination alive.

Q: Great. Thank you very much.

Chuck Thorpe: One more piece of advice for young people: Come to Carnegie Mellon.

Q: Work on robot cars.

Chuck Thorpe: Work on robot-- yeah! Come with me. Work on robot cars. I’m serious about the Carnegie Mellon part in the following sense: I don’t know that our professors are any smarter than professors at other places. The Berkeley professors are smart, the MIT professors are smart, the Stanford professors are smart, et cetera, et cetera. The difference is we have more of them, so if you know what you want to work on, and you really want to work on robot cockroaches, go to Berkeley, because they have the world’s expert in robot cockroaches. If you want to work on robot driving, or mobile robots, or robot origami folding, or robots and elderly people, or-- come to Carnegie Mellon. Or if you don’t know what you want to work on, come to Carnegie Mellon. Berkeley has four really smart people doing robots; we have 60. MIT has people in a few different labs doing mobile robots. We have people in 50 different labs doing interesting robots. It’s a fun place to be, just because of the variety of people and the variety of projects doing robotics.

Q: Great. Anything else you want to add?

Chuck Thorpe: Thanks for doing this project. I think it’ll be really valuable.

Q: Thank you for participating. Do you have any suggestions for people, if you were doing this project, that would be a must that we should talk to? Who are the two highest at Berkeley?

Chuck Thorpe: Did you get Bob Full? He’s the--

Q: No.

Chuck Thorpe: He’s the cockroach guy.

Q: Okay.

Chuck Thorpe: And you got Ken Goldberg?

Q: We got Ken Goldberg.

Chuck Thorpe: Okay. So I say four because I don’t want to dis them by only saying three.

Q: Okay, so talk to the cockroach guy.

Chuck Thorpe: I think we got to. Yeah, yeah.

Q: Well, then there’s the two guys at the Richmond Field Station.

Chuck Thorpe: Yeah. Have you worked on underwater robots?

Q: Not yet. We’re looking at it a little bit. I talked to Gino Marcovarario [ph?] while I was in Genoa over the summer, just because I was there, and we talked to Bob McGhee, who’s done some _______.

Chuck Thorpe: Oh, good! He did the adaptive suspension vehicle.

Q: Mm-hmm.

Chuck Thorpe: That was a contemporary of the autonomous land vehicle. McGhee was this towering full professor, and I sat next to him at a conference. He said, “Oh, you’re Chuck Thorpe. I read your thesis.” And I thought, “Oh, no.”

Q: He was great.

Chuck Thorpe: Yes, yes. The underwater robot community has not communicated a lot with the land community, but there are some very interesting stories to tell. Dick Blidberg, at the University of New Hampshire-- if you see him, say hi. He was doing this experimental autonomous vehicle, EAVE, for which he programmed the thrusters in binary, because they didn’t have assemblers and they had just invented the microprocessor. And he’s just finished running the 20th unmanned, untethered submersible vehicle conference. Dana Yoerger, at Woods Hole-- he’s the robot guy who worked with Bob Ballard when Ballard went off to find the Titanic. And these guys have fascinating stories of finding geothermal vents on the Mid-Atlantic Ridge, and so forth.

Q: Isn’t there a group at the Monterey Bay Institute?

Chuck Thorpe: Hans Thomas and Jim Bellingham. Hans was an undergraduate student who worked with me. He’s one of the ones who made me feel old. He said, “Chuck, get out of the lab. You come into the lab and you hack, and it’s old-fashioned C code, and then you go off to a conference and we got to debug it.” And he came and found me 10 years later and said, “I am so sorry. Now I know just-- one of my students came and said, ‘Hans, get out of the lab because you have old-fashioned’-- and I realized exactly what I did to you.”

Q: It’s okay. At least now he knows it comes back to you-- karma.

Chuck Thorpe: Hans was just running underwater gliders around the Gulf of Mexico, looking for plumes of oil.

Q: Do you know who makes the robot that was actually cutting the pipes for the ___________________?

Chuck Thorpe: That was a big, tethered underwater vehicle, and they had four or five different companies’ robots down there. Oh, shoot. Perisubmersibles [ph?], Oceaneering... I don’t know who the others were, but there were several companies working down there.

Q: We can get that.

Chuck Thorpe: And those tend to be not terribly intelligent vehicles. All the intelligence is at the other end of the tether.

Q: Yeah.

Chuck Thorpe: But you still want some intelligence because you want to do things like station-keeping, so they’ll hover even with it being bounced around by the current so you can do fine manipulation.

Q: You can probably also get them ___________________.

Chuck Thorpe: Not nearly as much as you would think.

Q: Yeah.

Chuck Thorpe: Those are very simple arms down there. And the other guys who’ve done very, very simple arms are the bomb squad people, and that’s because they say, “Look, this is a bomb squad. We’re going to blow the arm off once a year, and we can’t afford to have a complicated arm. Just put the simplest possible dumb thing out there.” Have you talked to Marc Raibert?

Q: No, not yet. He’s on our list.

Chuck Thorpe: Mark was a professor here for several years. I kept thinking they were doing demolition on the classroom right behind mine. It wasn’t a jackhammer; it was his hopping machine. Ka-thoonk, ka-thoonk, ka-thoonk, ka-thoonk, ka-thoonk!

Q: It was driving you crazy. Anybody else in the early autonomous vehicles that we might have missed?

Chuck Thorpe: Well, an interesting guy to talk to at Lockheed Martin is Jim Lowrie-- L-O-W-R-I-E. And you want to catch him in the next few months because he’s announced that he’s retiring so he can go sail around the world. But he was the project lead of the Autonomous Land Vehicle program, and his former boss, Roger Chappelle [ph?], you could also track down. Roger is semiretired and spends his time diving for gold and Spanish wrecks.

Q: Wow.

Chuck Thorpe: Yeah.

Q: That’s very adventurous.

Chuck Thorpe: Yeah.

Q: Who else would’ve been... I’m glad you got Bob-- [recording ends abruptly]
