Gerd Hirzinger

Graduated from Technical University of Munich

Gerd Hirzinger, director of the Institute of Robotics and Mechatronics at the German Aerospace Center (DLR) in Wessling, Germany, is best known for his work in advancing robotic space exploration. He developed ROTEX, the first remotely controlled space robot, which flew aboard the space shuttle Columbia in 1993. For many years he has been chairman of the German council on robot control and an administrative committee member of the IEEE Robotics and Automation Society. He has published more than 600 papers in robotics, mainly on robot sensing, sensory feedback, mechatronics, man-machine interfaces, telerobotics, and space robotics.
Transcript Text: 

Q: Just start by telling us where you were born, and where you grew up, and went to school.

Gerd Hirzinger: I was born in a small city in northern Bavaria, Schwandorf, and went to school then in another small city in northern Bavaria, Amberg. And finally I studied electronics at the Technical University of Munich. And got my diploma degree in 1964-- oh, excuse me, in 1969. I was born in 1945.

Q: And after 1969 did you go onto graduate school?

Gerd Hirzinger: No. I joined DLR then. I first started in an institute where I was supposed to work on interferometer technologies, high-frequency technologies. But after half a year I found out that control theory would be more interesting. That's an area which came up more strongly in Germany around that time. And then I saw there was a vacant position in the institute here, which at that time was an Institute for Control Technologies and Cybernetics. I got this position first as a researcher and started my PhD on digital control, which was a very new field at that time, because what we called process computers were just arising then, with digital equipment like the PDP-11 and this type of machine. And, yeah, in '74 I think I finished my PhD, officially submitted at the Technical University of Munich again, where I had already studied. And in 1976 I became leader of the automation group here in the institute, because I was the only one who really had the experience at that time with digital control with the first computers. I was working on satellite attitude control. We had an air-bearing table where I could simulate the reaction wheel technology, and thrusters for satellites, and so on. And the other part of my PhD was optimal digital control of unstable airplanes, control-configured vehicles, CCVs as they were called. So these were the two fields which even today are main fields of DLR: aerospace and aeronautics.

Q: And what were the challenges for digital control of these kinds of aeronautic systems?

Gerd Hirzinger: Well, my challenges were optimal control technologies: what is the best way to control these types of systems, especially time-optimal control. I focused on time-optimal control with restrictions of actuator limits and so on. Another topic I was very much interested in was the choice of an optimal sampling period, because, of course, if you have enough computational power you just sample faster and faster, until you are near to a fully analog system, which in a sense is the best one because you permanently observe the system. A digital controller gets updated only every millisecond, 10 milliseconds, 20 milliseconds. And my question was: if I know the dynamics of the system, of the plant, the controlled plant, what is an optimal sampling period that is just fast enough? So that you can say it doesn't make sense to become faster and waste computational time, I cannot really improve the behavior; but if I get slower with my sampling period, my performance may be worse. To find out these kinds of limits-- so I think it had some practical relevance.
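
The trade-off Hirzinger describes can be illustrated with a small simulation. This is only a hedged sketch, assuming an illustrative first-order plant and a fixed proportional gain (none of it is DLR's actual controller): sampling much faster than the plant dynamics buys almost nothing, while sampling too slowly degrades, and eventually destabilizes, the loop.

```python
import math

def simulate(T, t_end=5.0, a=1.0, b=1.0, k=5.0, x0=1.0, dt=1e-4):
    """Cost of regulating dx/dt = -a*x + b*u with u = -k*x held constant
    (zero-order hold) between samples taken every T seconds."""
    x, cost, t = x0, 0.0, 0.0
    while t < t_end:
        u = -k * x                # controller sees the state only at sample instants
        t_next = min(t + T, t_end)
        while t < t_next:         # integrate the continuous plant between samples
            x += (-a * x + b * u) * dt
            cost += x * x * dt    # accumulated squared error, our performance measure
            t += dt
    return cost

# Diminishing returns when sampling faster, degradation when sampling slower:
for T in (0.5, 0.1, 0.01, 0.001):
    print(f"T = {T:5.3f} s  ->  cost = {simulate(T):.4f}")
```

With these illustrative numbers, the costs for T = 0.01 s and T = 0.001 s are nearly identical (the "don't waste computational time" limit), while T = 0.5 s makes the loop unstable, which is exactly the kind of boundary the optimal-sampling-period question asks for.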

Q: And the first applications were satellites. Would you call those robotic systems or what would you consider the first robotic systems?

Gerd Hirzinger: No, they were not really robotic systems at that time. It was just a good field for early digital control investigations. But then-- it was around '76, '77 that in the institute we had kind of a brainstorming, looking around at what might be interesting applications for digitally controlled mechanical systems in the future, into which we could diversify, not focusing only on aerospace and astronautics, but what would be other interesting fields. And then we found manipulator technology. We thought about underwater manipulators. For a while we thought also about prosthetic systems. And, indeed, we then tried to start with manipulators for underwater technologies for a short period. And then, yeah, there was especially the moment when one of the department leaders of BMW visited us. His name was Wolff Riezler [ph?] and he talked with us about robotics. BMW at that time had only a few robots. These were ABB robots. ABB is still today one of the big robot manufacturers internationally. And I explained to him that our big interest while entering into this field of robotics was the topic of the institute, that is, feedback control. I told him that we thought robots in the future should be very much sensor-controlled, with sensitive feedback, and I especially talked about force control and vision feedback. And I told him that in my opinion we would be optimally prepared to provide robots with these capabilities. And that I was convinced that if we had the chance to work there intensively, in between five and ten years every robot might come out of the factory with force sensing and with vision capabilities. Today I know that this was by far too optimistic. Even today most of the robots come out of the factory and go into the automotive factories without sensory perception and sensory behavior.
So that-- as influence _______________ I've become very cautious with predictions for the future, because I'm a little bit ashamed that we could not do more in order to push these things forward on a broad scale. Of course, there are improvements, and I think it was in the late '70s, in '78 or so, that we started to intensively develop force sensors. But the reason indeed was that this group or department leader of BMW, when he left our talk, said, "Hey, listen. I'll give you an ABB robot as a gift, because you told me something very interesting, and we think this might be a wave for the future. We have only a few robots so far, but I will give you one as a gift." And then we were very happy, because we wouldn't have been able to afford buying such a robot. And then we immediately started to develop a Cartesian transformation, because this robot had only 4-bit microcontrollers. The first 4-bit microcontrollers had no floating point, nothing, and no Cartesian transformation. So you could control it only by commanding the joints separately. And so we started to enter into coordinate transformation. We provided the robot with a coordinate transformation, which is computationally difficult using hexadecimal code, very primitive technologies. And, yeah, finally we had it, and then we immediately started to enter into this field of sensory feedback and behavior. We developed a force-torque sensor, I think the first force-torque sensor in Europe at least. We wanted to commercialize it already in the early '80s; it was not a big success. It was too expensive at that time, but we were happy that, first of all, we had a dedicated electronic box. But still the problem was that when the sensor was mounted at the robot wrist, we had to transfer the signals via wires to the electronic box. Yeah, we had then already an 8-bit processor and we could do other computations there, but the signals were so low that we had a lot of disturbances.
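
The coordinate (Cartesian) transformation the group retrofitted onto that first joint-controlled robot is, in essence, forward and inverse kinematics. A minimal planar two-link sketch of the idea follows; the link lengths are illustrative, and this is of course not the actual transformation DLR coded in hexadecimal on those 4-bit microcontrollers:

```python
import math

def forward_kinematics(q1, q2, l1=0.4, l2=0.3):
    """Tool position of a planar two-link arm (joint angles in radians,
    link lengths l1, l2 in metres)."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

def inverse_kinematics(x, y, l1=0.4, l2=0.3):
    """Closed-form elbow-down solution; raises ValueError if the target
    lies outside the reachable workspace."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target outside the reachable workspace")
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2
```

With the inverse transformation in place, an operator can command the tool in Cartesian coordinates and let the software work out the individual joint commands, instead of driving each joint separately.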
And it was only later, I think already at the end of the '80s, that we developed the first force-torque sensors which had all the electronics integrated, so there were no long wires and cables to transfer small signals. And this work at the end of the '80s was then already related a little bit towards our next big goal, namely the first experiment in space robotics, because we at DLR are the German aerospace research establishment, and it was clear, not in the early days but in the '80s, that we should care about the idea of sending robots into space instead of astronauts. But looking back today, I must say that in the '80s, I would say between 1980 and 1987, when we had not so much space technology in mind, in this period we did a lot of interesting things which made us well known in the community. One of the guys very well known at that time discovered us: Mike Brady. I think it was around 1985 at the ___________________ in St. Louis, when we gave a paper there on our ideas of working with robots, guiding the robot around by a force sensor, and all the force-feedback schemes. We had developed a man-machine interface, the control ball, with which we could command the robot by our forces. And if the robot had a force sensor in addition at its hand, then we could exert forces in a dedicated way onto the environment. So everything was directed towards intuitive interaction with robots. And I think it was already 1982 that we really were in a situation where we could buy a second ABB robot. The first one was a gift of BMW. We bought a second one. We provided all of these robots with force-torque sensors, six-degree-of-freedom force-torque sensors. We controlled these two robots in a cooperative way so that they jointly grasped an object. And we controlled the motion of the center point of this object with our control ball from some distance.
So cooperative robots with human interaction and all these kinds of things. We took robots by their gripper and pushed them against some contour which they had to follow, and guided them around. All this smooth kind of interaction, learning by showing, was done in the early '80s. And this made us well known, so around 1985 I was also invited to one of the first International Symposiums on Robotics Research, in Gouvieux in France, because this International Foundation of Robotics Research was a small group which really cared for the most advanced robotic technologies. I think they were founded in '83 in Bretton Woods in the U.S., with Richard Paul and all the big names in robotics, Mike Brady. That was a very fruitful period, although we were a small group. And then, as I indicated, around '87 we thought there might be a chance that we go into space, and we are a kind of space institute, so this might be important for the future. And indeed we were asked: would you be ready to send something with a shuttle mission? A mission which was strongly connected to developments in Europe and Germany, the so-called Spacelab D2 mission, was supposed to fly with Shuttle Columbia. And the space people-- the agency people told us they had some special problem: "We have a free rack. Would you be able to use this rack if you are fast?" And I think there were only two years of time, and I said, "Well, if we have the chance, we will try to send something up, a small robot with force control, with sensory feedback, and try to do remote control from ground." Again, I had completely underestimated the difficulties: sending something into space is very difficult. And we had interesting discussions with NASA, because we said, "We are fans of the mechatronic concept."
"What we would like to do is develop a multisensory robot gripper, attach it to a rack in Spacelab, in the cargo bay of the shuttle, and make it run. For us that means the highest integration of electronics, mechanics, and computer science. We now have microcontrollers. We want to build a gripper. It should be a two-finger gripper, but we want to have a stiff force sensor with all the electronics integrated, a compliant force sensor based on optical technologies, nine laser range finders, tactile feedback. We want to have cameras and so on." And we had serious discussions with NASA, because NASA said, "That's not the standard we have in space. You should take out the electronics, put it into a box next to the mechanics. This is kind of the standard in space technology." And then we said that this would frustrate us quite a bit; in case we were forced to do that, maybe we would drop the whole experiment. And then we were very happy that NASA finally said, "Okay. No, we are interested in your experiment. We give you an exceptional permission. What you have to do then is build a second gripper of this type. First of all, you have to prove that it works under serious conditions, under space conditions. It's not outer space, it's inside the shuttle, so it's a little bit looser. But we have the vibrations at launch and so on." And because we had another problematic story, we said, "We do not want to use the very expensive space-proven components. We want to take electronic components-- in the USA one would say from Radio Shack; we have small electronic shops here. We want to test them, vibrate them, radiation tests, and so on. But we want to make it with local stuff first." Then finally NASA said, "Okay. Try it. You have to prove that nothing can happen, nothing serious can happen to the astronauts, no off-gassing and so on."
"You have to prove all that, and you have to build a second gripper, and we put it in a storage bay so that if something does not work, the astronauts can take it out and mount the spare system." We agreed on all that, and then there was, of course, a very bad story on the other side which gave us more time to develop things-- time would have been too short from what I see today; we would not have managed to bring this system up successfully in space-- and that was the Challenger disaster. Of course, we all were shocked. But for our mission, the Spacelab D2 mission, it meant that the launch was delayed by a few years. We had a longer time. The final launch was only in '93, and so we had longer time to prepare this experiment. But we put a lot of new technological things into the experiment. The remote control, the online remote control from ground, despite six to seven seconds of delay. These long delays were caused by the fact that we had to go via the antennas here up to the geostationary TDRS satellite, down to the spaceflight center, up again to a geostationary satellite, down in White Sands, New Mexico, then via land communication to Houston. There were quite a number of computers in the loop. NASA told us no one before had tried to close such a control loop via the shuttle, and so the infrastructure was not really available. And it was clear that the delays would jitter, and indeed they were jittering between five and seven seconds. But there were so many uncertainties. And indeed, imagine that we could not test this loop as long as the shuttle was on ground, not even at the launch site. Only when the shuttle was in orbit were we able to test this whole loop for the first time. So there were many, many uncertainties and risks in this experiment. But regarding the operational modes of the experiment, our idea was to prepare all the control modes which we could conceive of for the next 20 or 30 years.
And I think the results were the proof that we were right. We tested the online teleoperation from ground. We tested the automatic execution of tasks-- it was clear that we would do that. We did reprogramming from ground and then sent up the task for execution, wherever the shuttle was. And we did teleoperation by the astronauts locally. The astronauts had small stereo-TV shutter glasses. They could not look into the cabinet of the robot while they were working with it. They could control it with the control ball, and also from ground we could control the robot with the control ball. But, of course, with six seconds of delay there is no really useful feedback via the video camera images. So we had to prepare predictive computer simulation. With predictive simulation, as long as the robot is in free motion, you can indeed compensate the delays, but of course you cannot compensate these delays when the robot is in contact with the environment. But for these scenarios, for this situation, we prepared all the technologies which we called shared autonomy, shared control. So, for example, for the disassembling of a bayonet closure and remounting it in another place, we sent up gross commands, but the robot did not just execute these gross commands; it used the local sensory feedback-- the force sensor-- to modify these gross commands, and we call this shared autonomy. The operator does not have full autonomy, and also the path planner does not have full autonomy, but the local sensory feedback refines these gross commands. And then, for example, when the command came up from ground to turn the bayonet closure, the robot felt via sensory feedback: if I just turn, I am jamming, so what I apparently have to do is a screwing operation. And so these gross commands of just turning were refined automatically into a screwing command. The robot really took out this bayonet closure.
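
The shared-autonomy idea-- a delayed gross command from ground, refined on board by the local force loop-- can be sketched in a few lines. This is an illustrative toy, not ROTEX's controller: a one-dimensional tool is told to keep moving toward a stiff wall, and the local force feedback turns that gross command into a regulated contact force (all gains and stiffnesses are invented):

```python
def contact_force(x, wall=0.10, k_wall=2000.0):
    """Stiff environment: force rises linearly once the tool passes the wall."""
    return k_wall * max(0.0, x - wall)

def run(shared_autonomy, t_end=3.0, dt=0.001):
    """Execute the delayed gross command 'approach at 5 cm/s', with or without
    local sensory refinement, and return the final contact force."""
    x = 0.0
    v_gross = 0.05            # gross command from ground (it cannot react in time)
    f_des, k_f = 5.0, 0.05    # local force loop: desired contact force and gain
    for _ in range(int(t_end / dt)):
        f = contact_force(x)
        if shared_autonomy:
            v = v_gross + k_f * (f_des - f)  # on-board sensor refines the command
        else:
            v = v_gross                      # blindly execute the gross command
        x += v * dt
    return contact_force(x)
```

With refinement the contact force settles near the desired few newtons despite the out-of-date gross command; executed blindly, the same command grinds the tool ever deeper into the wall.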
So I think all these new ideas and operational modes really worked well. And, of course, the biggest event was the fully autonomous grasping of a free-floating object. In this case, we could not bring a vision system on board; the technology was not mature at that time. We took this as a challenge and said, "Okay. Then let's send the video images down here." We had a stereo camera. "And let the computers on ground estimate the motion, predict the motion six seconds ahead." And then we had the predictive graphics display, and we could decide: do we want to grasp the object via this predictive graphics display, manually by the operator, or, by just pressing a special key, should we let the system try to automatically grasp the object? And when the situation came, it was very dramatic, because the robot first took this floating object out of a fixture, gave it a kick, and then it flew around the small cabinet. Unfortunately, it flew behind the robot and I thought, "It will never come back," because the time slots we had were only 20 minutes, and after the 20 minutes other people needed the video line and so on. It's only 20 minutes. And I think we waited a quarter of an hour and the small object was not visible again, and only in the last minutes it drifted again into the reach area of the arm, and I knew we would have only one chance. I think it was one minute before the video line was closed that it came into the proximity of the gripper, and then we were very nervous, because it was three o'clock or four o'clock in the morning and everyone was very tired. And then we said, "Let the system do it alone. Maybe it fails, but if it does not fail it will be a big event." And indeed the computers really estimated the motion perfectly, decided to grasp, predicted six seconds ahead, sent up the command, and it grasped successfully. You cannot imagine what kind of hollering there was in the control room.
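
Predicting a drifting object six seconds ahead, so that a delayed command arrives "on time", reduces in the simplest case to fitting a motion model to the tracked samples and extrapolating it. A hedged one-axis sketch (ROTEX worked from stereo image measurements with a full 3-D motion estimator; here a least-squares constant-velocity fit stands in for that):

```python
def fit_constant_velocity(times, positions):
    """Least-squares line fit p(t) = p0 + v*t from tracked samples."""
    n = len(times)
    mean_t = sum(times) / n
    mean_p = sum(positions) / n
    sxx = sum((t - mean_t) ** 2 for t in times)
    sxy = sum((t - mean_t) * (p - mean_p) for t, p in zip(times, positions))
    v = sxy / sxx
    p0 = mean_p - v * mean_t
    return p0, v

def predict(times, positions, horizon=6.0):
    """Extrapolate the fitted motion 'horizon' seconds past the last sample."""
    p0, v = fit_constant_velocity(times, positions)
    return p0 + v * (times[-1] + horizon)
```

Each axis of the object's pose can be extrapolated this way; the predicted pose, not the six-second-old measurement, is what the ground station hands to the grasp decision.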
In that second I knew this was pioneering work. No one wanted to believe it; the astronauts did not believe it. And I must admit we had a lot of good luck. Even I myself did not believe that this would work. I've been talking too long now.

Q: So who are some of the people that worked on that project with you?

Gerd Hirzinger: Operators on this--

Gerd Hirzinger: Well, here in the institute, I think the most important person in my field over all the years, in this area of space robot technology and projects, has been Mr. ____________________ who is, for us, Mr. Space Robotics, because he kept all the connections to the industry and knows all the irregularities, all the-- now what do you call that? The regulations and the formalities. But he's also an excellent information technology engineer, and he keeps the group together. And from outside at that time, people from Astrium-- Settelmaier [ph?], who is now in a leading position at Astrium _______________________-- were very important. Professor Dickmanns in the vision part; his dynamic vision was the technology we used here. Yeah, these were important contributors. Other industries like Astrium Bremen helped; there was interesting cooperation. What made me a little bit sad at that time was that I had-- I still have a good friend who was one of the pioneers in telerobotics, Tony Bejczy from the Jet Propulsion Laboratory, and he had applied to NASA for joining us with his force-reflecting hand controller. So in addition to our control ball, which the astronauts used in the space shuttle to control the robot, he wanted to bring his force-reflecting hand controller so that they could compare it with our control ball, which is a force transducer and a pure command tool-- it gives velocity commands but does not reflect the force. Tony Bejczy wanted to experiment: what is the behavior of astronauts if they have force reflection in space, in the absence of gravity? Finally this proposal was not granted, and even today this makes me really a little bit sad. By the way, there was another event which gave us some delay, not so much delay as the Challenger disaster did. But when the launch was coming-- Tony Bejczy, by the way, came over here from JPL to watch it in our laboratory and to watch all the robotic experiments.
And that was one of the, I would say, famous launches of the shuttle, because I think one second before launch there was already vapor coming out of the rocket. The launch was stopped-- was interrupted, I think one second before, and everyone was so disappointed. I think I was the only one who thought: this is good luck. We have now another few weeks in order to do additional stuff, especially for the grasping of the free-floating object. We improved the software considerably in this time, and I think this was really the key point of our final success-- that we had another few weeks in order to improve things, because it's so difficult to simulate on ground, in reality, a free-floating object. We had no way to really simulate this. We had a linear actuator with which we could move such a small object on a small stick, a little bit rotationally and linearly, but it was not drifting in space; we could not really let it drift. So it was so difficult to test the original algorithms. But, yeah, that is life; we had a few weeks of extra time.

Q: Were there any other projects that you collaborated with JPL or ___________________ on subsequently?

Gerd Hirzinger: Not really official collaboration, not really official. We always kept close contacts. Even today we have contacts to the planetary exploration people-- Brian Wilcox; I was very much involved in the development of the ATHLETE [ph?] stepping-and-rolling system, a planetary system with wheels on the legs-- Richard, or Dick, __________________ and all these very experienced people at JPL. But Tony Bejczy, for me, was, together with Tom Sheridan, the pioneer in teleoperation and force-reflecting teleoperation, all these concepts. Hannaford [ph?] carried this work on later, in Seattle I think, and there were others who were influenced by this work. But, yeah, Tony Bejczy really-- he had some success. I think he could bring some force sensor onto the shuttle manipulator and test it there, but concerning the force reflection from space, he could not manage to bring something up, and, as I said, this makes me a little bit sad. Our next experiment will be bringing a force-reflecting joystick up to the space station and, together with the colleagues from the European Space Agency, trying to control a system on ground from space, but with the kind of _________________ of force reflection.

Q: What were the big challenges in the early design of the force-torque sensor that you had to overcome?

Gerd Hirzinger: We did many constructions of it-- many constructions, designs. They were, yeah, kinematic or structural designs-- how should such a force sensor look, with some bars and ______________ gauge technologies. There were early designs in the US. I think Victor Scheinman maybe was the first one-- or one of the first-- who developed such a force sensor. I think even until now things have not changed too much. We made many trials: how can we maybe decouple the forces and torques so that the sensors naturally decouple by the arrangement of the bars, and spokes, and so on. But it's not quite easy, and one of the problems with force sensors has always been that they have a reasonable range in terms of forces, but the range is too small in terms of torques, because you have a certain lever-arm distance, which is typical for a tool of a robot, and then typically you get fairly high torques compared to the forces. So this is a major problem, but of course a lot of progress has been made. As I mentioned already, _____________ has been brought into the mechanics in a mechatronic way, so there are no big disturbances; the signals are digitized very quickly and are much cleaner than we had them in the early days. And there are two American companies who are fabricating the sensors. They have some business success. I think it's still not big numbers. It's still not the situation that all the robots come out of a factory with force sensing. But I think the progress is going on.
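
The decoupling problem mentioned here-- each gauge channel responds mostly to one load component but cross-talks with the others-- is commonly finished off in software with a calibration matrix. A small hypothetical planar example (three channels, loads Fx, Fy, Mz; the coupling numbers are invented for illustration and are not from any real sensor):

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

# Coupling matrix of a hypothetical planar sensor: each channel responds
# mostly to "its" load component but cross-talks with the other two.
C = [[1.00, 0.08, 0.03],
     [0.05, 1.00, 0.10],
     [0.02, 0.07, 1.00]]

def decouple(signals):
    """Recover the loads (Fx, Fy, Mz) from raw gauge signals s = C @ w."""
    return solve3(C, signals)
```

In practice the matrix C is identified by applying known reference loads; a full six-axis wrist sensor uses the same idea with a 6x6 (or larger, least-squares) calibration matrix.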

Q: How did you come to the idea for the lightweight robotics?

Gerd Hirzinger: That started with the ROTEX experiment, the first remotely controlled space robot, which I talked about, in the cabinet of the Spacelab D2 mission. In such a project, as I said, the time scale was very difficult. When we got the offer to send something up, it was clear that we would not have much time, so the question for us was, "What can we do by ourselves and what do we have to leave to others, to partners, industry partners?" And then we decided that we wanted to make the multisensory gripper by ourselves completely. And we decided to develop the whole tele-programming, teleoperation, ground control, all the autonomy and shared autonomy concepts by ourselves. The only thing we really gave to the industry was the construction of the small arm, and that depressed us. That's not a big accusation of industry-- they did their best-- but we were surprised that although the arm was small, I don't remember exactly but less than one meter, it weighed around 40 kilos. The problem was that the company tried to use fairly weak motors in order to save weight, because in zero gravity you don't have to compensate the gravity as on Earth. So the motors were fairly weak, with the result, however, that the arm, which still weighed around 40 kilos, could not sustain itself on ground. So it had to be suspended by springs, and that made it very difficult to do ground training, because the reachability and maneuverability were very small. And so I think it was around '87 that we said to ourselves, "This cannot be the story for the future. We have to build a lightweight arm which is the same on ground and in space, and yet on ground is capable of carrying its own weight, not breaking down. We should not use springs to suspend it."
It should be very strong, and we should have the possibility to simulate the zero-gravity behavior. It was clear that we should be able to exert torques in the joints which just compensate the gravity, so that if you push it, it flies away as in space. So this was the first idea for developing a torque-controlled lightweight robot. That was the one motivation. The other one was that in these early days, in 1985, when I was invited to the International Symposium on Robotics Research, I had the first close contact with Oussama Khatib, and I was very impressed by his ideas, which more or less were in the direction that robotics of the future should not be so much focused on just prescribing a position in Cartesian space, translating it into joint space, and just executing the motion of the robot. I felt that his approach was so fascinating because he tried to go back to the roots, asking: what is the basis of a motion? And the basis of any motion in space is the acceleration. By integrating acceleration you get velocity, and by integrating velocity you get the position. So by setting the roots at acceleration you have everything. You prescribe acceleration in space, and for this prescribed acceleration you need the mass matrix of the arm, as in Newton's law, and then you get the force and torque vector at the gripper which you would need if the robot were a completely free system which you can move around. And then, by the transposed Jacobian, you can calculate the torques which you need in the joints. So you really go back from the basic idea of acceleration to the torques in the joints. But with a classical robot you cannot exert torques, because torques are to a major part eaten up by the friction effects in the gearings.
So you need a torque sensor and torque feedback, and I think Oussama Khatib was the first one who recognized that and made experiments with Dr. Stewart and Fisher [ph?], who came from Switzerland. These ideas impressed me very much, and I thought we should try to build a robot which could realize all these modern concepts. So I wanted to combine the needs from space-- ultra-lightweight, and another need from space was minimal power consumption; that was also one of the drivers for the lightweight robot-- with the other driver, the torque control concepts of Khatib and all the more theoretical ideas behind them. And indeed, the first version of the lightweight robot is hanging on the ceiling. It was finished in time for the astronaut training, so in '92, when the astronauts were trained, the first arm could be used. It could sustain itself on ground without springs, but it was not yet perfect, and it had not really good torque control at that time. We had made some construction errors. We had integrated gearings with very, very high reduction. They were small, with very, very high reduction, 1 to 600, but it turned out that they were not fabricable in a reliable way. The tolerance requirements were too high to fabricate them in a reasonable way. Nevertheless, the arm was used for the astronaut training. It was already a dream, at least for the astronaut training, to have an arm which could more or less do all these motions without having springs hanging around. This goal was achieved at that time.
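
Khatib's chain of reasoning as Hirzinger retells it-- prescribe a Cartesian acceleration, use the mass matrix and Newton's law to get the tool force, then map that force to joint torques through the transposed Jacobian-- can be sketched for the simplest possible case. Here the arm's inertia is radically simplified to a single point mass at the tool (so the "mass matrix" is just m times the identity); this illustrates the structure only, not operational-space control in full:

```python
import math

def jacobian(q1, q2, l1=0.4, l2=0.3):
    """Geometric Jacobian of a planar two-link arm's tool point."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def joint_torques(q1, q2, ax, ay, m_tip=2.0):
    """Prescribed Cartesian acceleration (ax, ay) -> Newton's law gives the
    tool force -> the transposed Jacobian maps that force to joint torques."""
    fx, fy = m_tip * ax, m_tip * ay      # F = m * a (point-mass 'mass matrix')
    J = jacobian(q1, q2)
    tau1 = J[0][0] * fx + J[1][0] * fy   # tau = J^T * F
    tau2 = J[0][1] * fx + J[1][1] * fy
    return tau1, tau2
```

On real hardware these torques can only be commanded if the joints have torque sensing and feedback to cancel gear friction, which is exactly the motivation for the torque-controlled lightweight robot described next.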

Q: When was that completed?

Gerd Hirzinger: I think this first arm was completed around '92, maybe '91. The mission was in April '93, so the astronaut training was one year before; I think '92 were these tests with the first lightweight robot. Then we went on. The second version of the lightweight robot maybe was finished five or six years later. This version had several modifications. I think we went to harmonic drives then, which are elastic transmissions, but we had reasonable torque control already, and so we could compensate for the vibrations caused by the elasticities and so on. We still had classical motors, motors we could buy off the shelf. The arm did not look so nice yet-- you could squeeze your fingers in there, and these kinds of things. And I think at the beginning of the new century we were then successful in building up a first version of the third generation, which was made again of carbon fiber, but no longer with the grid structure but a shell structure, and with our own motors, what we call the RoboDrive motors. They were optimized with maybe two years of simulation. Maybe they were the first robotic motors which really were systematically developed for robotic applications. So far robot manufacturers have used the best available motors for their robots. We asked ourselves: what is the typical need for a robot motor? And we said, well, one need is high dynamics. We do not need the high velocities which the typical motors have. We do not need a few thousand revolutions per minute; no robot is moving at this rate.
What we need is extremely high dynamics, low velocities, and yeah, low weight of course, high torque, and low power consumption, and in particular an operational mode which consists of permanently inverting the motion, back and forth, permanently back and forth. We put all this into an optimization criterion, but I think it took us two years until we had a motor where we said okay, this design optimizes our criteria. Meanwhile we have founded a small company out of that, building these motors in series, and they're quite successful. This was the key element for the new lightweight robot, the number three.
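The motor design criterion Hirzinger describes, rewarding torque and dynamics while penalizing mass and losses rather than maximizing speed, can be sketched as a toy weighted score. All motor names and numbers here are hypothetical illustrations, not RoboDrive data:

```python
from dataclasses import dataclass

@dataclass
class MotorDesign:
    peak_torque_nm: float      # torque the motor can deliver
    rotor_inertia_kgm2: float  # limits acceleration (dynamics)
    mass_kg: float
    copper_loss_w: float       # power dissipated at rated torque

def score(m: MotorDesign, w_dyn=1.0, w_mass=1.0, w_loss=1.0) -> float:
    """Higher is better: reward dynamics (torque/inertia),
    penalize mass and power losses."""
    dynamics = m.peak_torque_nm / m.rotor_inertia_kgm2  # achievable rad/s^2
    return w_dyn * dynamics - w_mass * m.mass_kg - w_loss * m.copper_loss_w

# Two hypothetical candidates: a conventional high-speed motor
# versus a high-torque, low-loss "robot" motor.
conventional = MotorDesign(0.5, 1e-4, 1.2, 40.0)
robot_motor  = MotorDesign(2.0, 2e-4, 1.0, 25.0)
best = max([conventional, robot_motor], key=score)
```

A real optimization would add the back-and-forth duty cycle Hirzinger mentions as a thermal constraint; this sketch only shows the shape of a multi-objective criterion.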

Q: What's the company name?

Gerd Hirzinger: The company's name is RoboDrive. It's still not a big company, but meanwhile they have many inquiries from abroad. So far they have focused mainly on building the lightweight robot systems in cooperation with KUKA, which licensed the arm a couple of years ago and is doing work mainly for the research labs at the moment, but now also starting into automotive companies and so on.

Q: And who are the people that started that company? Were they people who were working in your lab?

Gerd Hirzinger: Yeah, they are. It's kind of a model which we are realizing here. Technology transfer is not simple. We made the experience that it's extremely difficult if you just apply for a patent, get the patent, you have the technology and the _______, and you find a company. The company leader is enthusiastic and you say okay, you get a license, you have a description of the technology, try to make something out of it. The company leader maybe is a fan of the technology, but then he gives the order to his developers. It often happened that the chief of development says, "Well, these people from the research institute, maybe they have not so much experience. We from industry, we have to take this into account and this one," and often they try to reinvent the wheel. It's difficult for them to identify with something which they have not developed by themselves; it comes from outside. To accept that other people are also quite clever is psychologically not easy. I think that we made the best experience with another model where we first found a spinoff company: the person or the persons who developed this in the institute, they found the company, stay at least part time in the institute and care for the company until the company has at least a certain size. This transfer via the heads, from my present view, seems to be maybe the better one. And here with RoboDrive I think it's the same model.

Q: Have there been other spinoff companies that have come out of this?

Gerd Hirzinger: Yeah, there's another company, SENSODRIVE, and they have focused a little bit on the torque sensors. Not the motors, but the torque control, and they are working on force feedback technologies. They are developing haptic interfaces and so on. These are not big companies, but they're doing quite well. There were other companies founded which related very closely to KUKA, for example; they are meanwhile a real part of KUKA, no longer own companies. Years ago, but that's already a long time back, we founded a company which developed, jointly with the institute, a six degree of freedom hand controller. We controlled the first space robot, as I said, with the control ball, and then the industrial or commercial version of this controller was the so-called SpaceMouse. At the same time there was another company that developed the SpaceBall. We had some patent conflicts at that time, but finally Logitech took over both developments and unified them in their own company, and for a number of years it was a subsidiary of Logitech. The SpaceMouse today I think is the most popular 3D man-machine interface, especially widely used in computer graphics and design and construction at automotive companies. I think there are more than a million systems in operation.

Q: Great. And who were some of the individuals who started these spinoff companies, their names?

Gerd Hirzinger: The one who very much drove this space command or SpaceMouse company was Ben Cumbert [ph?], the mechanical engineer who was very much involved in the gripper of this ROTEX experiment. The electronic engineer who developed the SpaceMouse technology was Johannes Dietrich, who at the same time was one of the leading electronic engineers for the ROTEX gripper. Ryan Lezza [ph?] originally was a little bit conservative with respect to this mechatronic concept, all the electronics in the gripper. Johannes Dietrich later on got the Judith Resnik Award for achievements in space electronics. Judith Resnik was one of the female astronauts who died in the Challenger disaster. And Frank Asempt [ph?] was the computer science engineer for the SpaceMouse. Concerning the SENSODRIVE company, which develops torque sensors and haptic devices, Norbert Sporer [ph?] was the driver and founder of that company, and concerning the motor company RoboDrive, Manfred Schedl [ph?] is the head of this company. In earlier years other companies came out of the institute. I think it's already 20 years back that a multi-body simulation company came out, with SIMPACK. I don't know what they are called today. They changed their name, but the SIMPACK product is very successful in multi-body dynamics and modeling, especially in the train area, also in the automotive area.

Q: So you also have PhD students working here from various universities? And who are some of the PhD students that you've trained over the years, and where have they gone off to teach or do research? Any ones that stick out?

Gerd Hirzinger: Yeah. I think a few of them are now in a position where they can push things forward in a responsible role. For example, one of the PhD students who worked in the early days of the lightweight robot on the motor control and the control concepts was Alexander Verl, V-E-R-L. He's now head of one of the biggest automation and robotics institutes in Germany, the Fraunhofer Institute for Production and Automation in Stuttgart. It is very much industry oriented, so I think what we are doing is different from them, but he's chief of this very big institute. Another person who had an industrial career is Dr. Kuppa [ph?]. Over a number of years he was development leader at KUKA for a new KUKA robot generation, also for the licensing of the lightweight robot. Dr. Kuppa is now development leader of a new KUKA branch, the KUKA Labs, which is kind of a new branch for advanced robotics and service robotics in KUKA. And there is a professor at the Technical University of Hanover, Dr. Whatmeyer [ph?]. He was one of the key figures in our medical robot development. Just to mention three who became fairly well known outside.

Q: Who was your own PhD advisor? Who were some of your teachers when you were at ?

Gerd Hirzinger: I think he's also one of the very good names. It was Professor Günther Schmidt at the Technical University. He did interesting work in the area of mobile robotics. He was an excellent control theory expert, and in September we will have the 50th anniversary of this institute of the Technical University; I will have to give a talk and he will also be there. So he's really one of the very good names in that area. During his period, laser scanners were developed. One laser scanner, which was built by the research assistant _______ at that time, is now commercialized by Lika [ph?]. Lika is a very well known company in that area.

Q: And were there other researchers who were working at DLR when you first came here that you consider your mentors or teachers?

Gerd Hirzinger: Yeah. There was one. When I came here to DLR into that control theory group as I mentioned, that was one of the strong groups at that time, the leading control theory group in Germany. I think the new theories which just came up in the US came into Germany via this group, because nearly everyone in this group flew to the US and studied there, typically in Berkeley. One of them was Eckhard Freund, F-R-E-U-N-D, and he was a very clever control theorist. He got an offer then to go to UCLA as a teaching professor and stayed in the States I think two years. I had shared the room with him, and his special field was decoupling control of multi-body systems, of multi-input systems: multivariable control systems, systems which have a number of control inputs and a complicated output, complicated behavior. And he told me, "Well, I have developed so much theory and I want to apply it, and my idea would be to apply this to robots." Robots were very fresh at that time. No one really knew much about them, but for Freund robotics was a topic of decoupling, because you want to control the single actuators and yet you want a certain motion to come out, so he wanted to apply this theory to robotic systems. Well, I was not so much experienced and I said, "Well, I will concentrate on the simpler control problems and leave these more complicated things to you." Then he was, as I said, offered a position at UCLA, stayed I think one or two years, then also one year at the European Space Agency where he had to do other things, and then he got an offer from another German state, North Rhine-Westphalia: he was offered to build up an institute for robotics in Dortmund. We always kept in contact. He had an assistant, I think it was Mr. Mainer [ph?], but that's not so crucial here. Then he called me one day and said, "Hello, Gerd. You remember I always told you that I would try to apply these things to robotics.
We have now interesting talks with VW," and you must know that this was I think at the end of the '70s, and at that time VW started to be the only big robot manufacturer in Germany. Amazing. They wanted to build up robot lines for their own factory, not so much selling robots; they were aiming at the factory without people. They had heard something about Japanese factories and so on. And it did not take too long before he called me and said, "Gerd, imagine, we have now a big contract from VW. We are supposed to develop the control system for the VW robots." And then I think that was the first big robot event in Germany, a complete hall more or less fabricating cars by robots, and the control system came from Eckhard Freund, so with long roots back to the institute. And the nice story about that is that Eckhard Freund was then also very well known in the US, and he was one of the founders, with Richard Paul and Mike Brady, of the International Foundation of Robotics Research, which is still active today; the next symposium, which unfortunately I cannot attend, is in Flagstaff at the end of August. So this closes the whole loop.

Q: Is he still around?

Gerd Hirzinger: He was several times among the reviewers of our institute. He had founded his own company. And all of a sudden-- he was hunting, and when coming back from one of his hunting trips, he died in his car. I think it was three or four years ago. I think he was not yet 65; he was a little bit younger. So dramatic. All of a sudden he died, and he was so vivid and so active. It's really a big shame, but this is how life works.

Q: So the developments with KUKA, did you have any direct relations with KUKA or were these sort of industrial relations through these spinoff companies?

Gerd Hirzinger: Maybe I should mention two things, if you allow. First, we tried to pursue the space robotics line, of course. That was important, and it all happened in the '90s. On one side we were pursuing the space robotics line, and the next exciting experiment was in '99, where the Japanese colleagues, especially Dr. Oda, who is today at Tokyo University and who was the project leader, wanted to send up the first free-flying robot in space: a robot standing on a satellite, to be controlled from ground. Dr. Oda and his team had been invited by us in '93. They were observers here in the control rooms; they could observe everything. They were very grateful, and they invited us then to join their experiment. They built the robot on their own, sent it up in '97, and in '99 we went to Tsukuba with our ground control station and we were allowed to control this arm from ground. We did beautiful experiments again: preprogramming and reprogramming from ground, sensor-based operations preprogrammed in a virtual environment on ground. But maybe one of the most exciting experiments was the first demonstration of the dynamic interaction between the robot and the satellite. With the robot on the satellite you can just move around. We are talking about swimming motions, like a cat waving with its tail, and thus the robot can reorient the whole satellite. It's just the principle of reaction: if you jump out of a small boat, the boat goes to the other side. It is the same reaction principle, and with it you can do attitude control. The Japanese were very nice and switched off their _______ control system, and we were allowed to reorient the whole satellite by just waving around with the arm.
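The reaction principle described here follows from conservation of angular momentum: with no external torques, the satellite body counter-rotates against the arm. A minimal single-axis sketch with hypothetical inertias (real free-floating dynamics are configuration-dependent, so this is only the zeroth-order picture):

```python
import math

def satellite_rotation(arm_inertia, sat_inertia, arm_sweep_rad):
    """Total angular momentum stays zero, so the satellite counter-rotates:
    I_sat * dtheta_sat = -I_arm * dtheta_arm."""
    return -(arm_inertia / sat_inertia) * arm_sweep_rad

# Hypothetical numbers: a 20 kg*m^2 arm swept through 90 degrees
# on a 400 kg*m^2 satellite body.
dtheta = satellite_rotation(20.0, 400.0, math.pi / 2)
# the satellite turns about -4.5 degrees for a 90-degree arm sweep
```

Sweeping the arm back along a different path (the "cat tail" motion) lets the net satellite attitude accumulate over repeated cycles.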
That was a very interesting and also successful experiment. But at the same time, in the mid '90s, we were slowly looking at industry, and KUKA to us was still a small robot manufacturer, smaller than VW, but they were near us here and they were just phasing out the controllers which they had formerly got from Siemens. They wanted to build up their own controller, and there was a basic decision at KUKA: should they try to set up their own big team in control theory and modeling and simulation, or should they rely on the DLR institute, which had the reputation of being a leading control institute in Germany? I think similar decisions had to be made at ABB, which I think at that time was by far the leading European robot manufacturer. ABB decided to set up a fairly big control team in their own company, and KUKA, in the person of Mr. Liebert [ph?], decided to rely on our institute. I think it was risky, because with a research institute, well, if the head changes, if I had left or if conditions had changed, maybe the interest would have been less. But he felt that we were really interested to help here, and it was hard work. What we promised, inspired also by all this work of Oussama Khatib, was that it should be possible to compute the complete dynamics of a robot system, which is very complicated. We thought we had the technologies to compute this model in real time and invert it, so that for a desired motion we can really command the currents in the motors so that the robot executes this motion, and especially so that, for example, it damps the vibrations, so that there are no longer such big vibrations. And the first big task was that the robot decides by itself how much it can accelerate along a path which was taught before, without overloading the joints.
So far the programmers had to find this out in tedious work; they sometimes needed 14 days to find out how fast the robot could move on a certain segment of a path without overloading the joints, and the idea was that the robot should know by itself how much it could accelerate without overloading the joints, more or less in real time. No one wanted to believe that this was possible, but the Pentium processors came up around that time, '94 or so, and I think in the middle of the '90s we had the solution. We could show that the robots really were able to do these kinds of things. There was a test installation at BMW, and I think within 14 days the complete software at BMW was exchanged for the new software including the robot dynamics. So that was a big step, and I think at the end of the '90s KUKA won every competition, be it at Mercedes, be it at Ford in the USA. I remember that competitors flew in their engineers, but I think there was really an enormous head start which KUKA had at that time. Of course later on the other robot manufacturers came along with comparable technologies, but a lot of people in KUKA have said that this was the key point for their rise up to number three in the world, which they presumably are today, three or four or something like that. And since then we have indeed a very strong relation with KUKA. It's not an exclusive one, but it's an active one, and we are doing projects together. They have licensed the lightweight robot, and they're interested in other technologies: enhanced grippers, vision and so on.
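The two control ideas from this period, inverting the robot's dynamics model and letting the robot choose its own path speed without overloading the joints, can be sketched for a toy one-joint model. The dynamics, torque limit, and trajectory below are invented for illustration, not KUKA's actual algorithm:

```python
import math

def inverse_dynamics(q, qd, qdd, m=5.0, l=0.5, g=9.81, b=0.1):
    """Torque needed by a toy 1-DOF link: tau = I*qdd + b*qd + m*g*l*sin(q)."""
    return (m * l**2) * qdd + b * qd + m * g * l * math.sin(q)

def max_speed_scale(samples, tau_max=40.0):
    """Uniform time scaling t -> t/s multiplies velocities by s and
    accelerations by s^2; binary-search the largest s that keeps every
    sampled joint torque within |tau| <= tau_max."""
    lo, hi = 0.0, 10.0
    for _ in range(60):
        s = 0.5 * (lo + hi)
        ok = all(abs(inverse_dynamics(q, qd * s, qdd * s * s)) <= tau_max
                 for q, qd, qdd in samples)
        lo, hi = (s, hi) if ok else (lo, s)
    return lo

# Taught path: q(t) = 0.5*sin(2*pi*t), sampled analytically over 1 second.
two_pi = 2 * math.pi
samples = [(0.5 * math.sin(two_pi * t),
            0.5 * two_pi * math.cos(two_pi * t),
            -0.5 * two_pi**2 * math.sin(two_pi * t))
           for t in (i / 200 for i in range(201))]
s = max_speed_scale(samples)   # here roughly 1.4x the taught speed
```

Real time-optimal path following varies the speed along the path rather than applying one global scale, but the principle is the same: the dynamics model, evaluated online, tells the controller how hard it may push.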

Q: Has VW continued their research into robotics, or do they outsource that now?

Gerd Hirzinger: VW then stopped their own robot development. They switched over mainly to KUKA. Recently they had a big order; I think part of the robots will be delivered by Fanuc [ph?], so they are not exclusively bound. And I think they have still been considering whether they should start up their own robot production again, but on the other side it's clear that the manufacturers have so much experience, and it would be difficult to do all this work again and compete with robot manufacturers which are not doing anything else than manufacturing robots.

Q: And you've already mentioned that Mercedes and BMW have also sort of collaborated with KUKA. Have they ever tried to start their own robotics research--

Gerd Hirzinger: No. As far as I see, the only company thinking about producing their own robots, partly due to the history they have in that field, has always been VW. Some of the former people have also been outsourced to a small company; they're producing some robots for specialized applications, but not in big series.

Q: And have you done any collaborations with researchers at other automotive industries as far as ?

Gerd Hirzinger: Yeah. In these early days which I mentioned, which were very exciting, when KUKA had not yet really got their own robots, that was especially in the '80s, we had for example a remarkable collaboration with Mercedes, also with BMW. At the Hanover fair in 1978 already we demonstrated the assembly of an oil pump; that was a task that came out of BMW, and we did this with our first force-torque sensor. The assembly of this oil pump went maybe 30 times more slowly than the human workers did it, and I thought that in a few years we would be doing this at least as fast as the human workers do. But this is one of the disappointments I had to realize, and these kinds of things are today coming back to us: automotive companies say, "Hey, you have now the new lightweight robot. Could you assemble these oil pumps automatically?" We had a project at Daimler, a very exciting project, for example closing the holes in the car bodies with rubber plugs, which the human workers press in with their thumbs. We had developed an inductive sensor system which sucks up these rubber parts, seeks the hole inductively and then presses the plug in. I think it's still not realized in this way. We used at that time robots of the Siemens subsidiary Manutec. That's also quite an interesting story. Siemens had decided to set up their own robotics branch. The company was called Manutec, and I think the first sensory interface was offered by Manutec, and the first industrial robot sensory feedback was developed by us in cooperation with Siemens, especially these inductive sensors for the rubber plugs in car bodies. But after a couple of years, Siemens gave up the production of robots. Siemens is still discussing whether this was a good decision and whether they might enter the field again in one way or another.
You know that there are robot vacuum cleaners and things in automotive mobility systems and in logistics, and so I think Siemens is still basically interested in all these things, but so far the decision has been not to enter again into the classical robotics area. We have done other interesting applications in these early days, like cutting rubber away from car steering wheels with an automotive supplier which fabricated 10,000 of these steering wheels per day, where the women were sitting there with knives and cutting off this rubber part. I don't know whether this is done today by robots, but I see that many of these things are coming back now and are not perfectly solved so far. It's a little bit frustrating to admit.

Q: And the robotics work at Fraunhofer, the institute headed by one of your students: have you done collaborations with ____________?

Gerd Hirzinger: We always had good contacts. There was an interesting story about that. When we started robotics in '75 or '76, Fraunhofer Stuttgart was already a name in that field. I think at that time Professor Warnecke [ph?], who later on became president of the Fraunhofer Society, so the head of the whole Fraunhofer association, was the institute leader, and there were a few other bigger names in Germany. Well, Professor Freund was already known at that time, and, yeah, in Berlin there was Professor Spur [ph?], that was also a Fraunhofer institute for production, and in Aachen there were big institutes for automation. So there were big names, but more production-oriented, not so much towards development of new robot systems. And of course we thought, "Oh, we are very, very small," and I really was very worried when my director, who afterwards promoted us and me very much-- he was a long-term president of DLR's board of directors, Professor Kröll [ph?], and he became a good friend I think of Daniel Godin [ph?]. When he came into DLR, I think that was late already, around '85 or so, '86, he invited all these bigger names in German robotics and automation, and he said, "Come to Oberpfaffenhofen. I would like to hear your opinion about what this group is doing here." And, not to my surprise, but I was really pleased that we got very positive comments. These colleagues knew that we were the newcomers. We had other ideas; we wanted to do more sensing, more new types of robots and so on, and they knew that maybe we would be competitors, but they were very fair and said, "Yeah, this is a good group. They are doing good work." And this helped us quite a lot, because then Professor Kröll pushed these space robot technologies much forward. He was always in favor of our work.
When after this robotics experiment I got the Leibniz Award, which is the highest scientific award in Germany and which gave us one and a half million euro for free use-- that is, we had to ask nobody what we were doing with the money-- this was the reason why we could start medical robotics, which is another topic in DLR. We started in the early '90s then the development of an artificial heart and many other things. So it was really good that our big competitors said "This is good work. We have to be honest here," and that Professor Kröll then promoted us. Sometimes in life these kinds of things are of course important. Concerning, by the way, the year 2000 and later, just to mention the space robotics field: around 2005 we started our third space robot experiment, which was, I can say today, very interesting and also very successful. We sent up a small arm, only two joints, onto the outer side of the International Space Station, on the Russian part of the outer skin of the space station. This project was called ROKVISS. It's a small arm which stands in the background, this gray thing. It's a mockup, a laboratory model. It has a stick at the end, and it could move with the stick along a contour which was mounted directly next to the robot. It was supposed to draw springs in that contour. It could follow the contour automatically because it used our lightweight robot drives and joints, including the sensory control and torque control, especially torque control. And the torques were sent down, so we could feel them in a force-reflecting hand controller, a joystick as the kids are using. The nice thing here is that we saw that the infrastructure in space is still not suitable for doing online teleoperation. If you go via Houston or something like that, the delay until you reach the station is too long. So we did this experiment with teleoperation just when the station flew over us. In this case you have only 300 kilometers.
You have only a coverage time of seven to eight minutes, then the station disappears again, but when the station is visible you have only 300 kilometers, and then you have only a delay of 20 milliseconds, very, very little. And there's no difference from the situation when you control the arm here; the feeling is the same as if you were controlling it on the space station. And this we have done now for over five years, nearly six years, until in May, by our request, the astronauts disassembled the small arm and brought it inside, and it was flown back to Earth, I think in May, landing in Kazakhstan. It's now in Russia, in Saint Petersburg, and in a few weeks we will have it here and we'll look at what happened to it in six years in free space. There are only a few technical systems which have been in space over six years and which come back to Earth, and we'll investigate very precisely what happened to the gearings, to the electronics. What does the system look like? I heard stories from the space station that there are a lot of hits from small particles, that the space station does not look very good from outside, and we are very curious how this small arm looks. But, as I said, we wanted on one side to test our robot joints, that they withstand the environmental conditions over six years, and on the other side we wanted to demonstrate the concept of telepresence. Telepresence means I want to have the feeling as if I were there, but I cannot be there, because the distance is too long, it's too cold or it's too dangerous concerning radiation. This is a very general principle. And our story is that this principle is the same one which we have to apply in modern surgery. And so this is one of the other interesting topics in our institute, surgical robotics. You know that a group came out of NASA in the early '90s.
They joined then the Stanford Research Institute and later on founded the company Intuitive Surgical, and they are now the market leader in surgery robots with their system. And this is exactly the concept of telepresence. You are sitting only a few meters next to the patient, but you want to operate in an area which is not directly accessible if you do not want to cut the body open further. You want to work inside the body. At the moment they are using only stereo vision feedback, but we think that haptic or force feedback would also be very interesting, to have the feeling of how stiff the tissue is. Therefore we're working on our own system, the MiroSurge system, which is fairly small and which tries to realize all these additional features like force feedback. But the basic concept is the telepresence concept.
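The delay figures quoted for the direct overflight link can be sanity-checked from the speed of light; the roughly 20 ms Hirzinger mentions would include processing and framing on top of the pure propagation time, which for 300 km is only about 2 ms round trip:

```python
C_KM_S = 299_792.458  # speed of light in km/s

def round_trip_ms(distance_km):
    """Pure propagation delay for a signal out and back."""
    return 2 * distance_km / C_KM_S * 1000

# Direct link during an overflight: station roughly 300 km overhead.
direct = round_trip_ms(300)          # about 2 ms
# Routed via relay infrastructure the path is far longer; a single
# geostationary hop alone adds ~35,786 km each way.
relayed = round_trip_ms(300 + 2 * 35_786)   # hundreds of milliseconds
```

This is why force-reflecting teleoperation worked in the overflight window but not via the normal relayed ground links: stable haptic feedback tolerates only a few tens of milliseconds of loop delay.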

Q: Is that a similar motivation for designing this system behind you as a telepresence?

Gerd Hirzinger: You mean the system here. The system here is more a standalone platform, aiming more at autonomy. But on the other side we have shown several times that there's also a nice kind of application where we cut it just here, fix it as an upper body somewhere and then control the arms via telepresence, with stereo feedback via the eyes here and with hand controllers. We are using the same lightweight robots as hand controllers, because they have force-control capabilities: they can feed back the forces which are sensed by the arms into the arms which are used as controllers. This is what we did, and now the Robonaut system, which is kind of an upper body, is at the space station. We have a cooperation with colleagues from the European Space Agency, Dr. André Schiele. He stayed with us half a year, and he has built a special exoskeleton, which is a little bit more natural even, but on the other side it's not quite easy to wear such a thing. His interest is to bring it to a certain part of the space station, maybe to the Columbus module, and then let the astronauts control a robot arm under zero gravity, so that comes back to the experiments which Tony Baichi [ph?] wanted to make with force feedback many years ago. André Schiele wants to bring the exoskeleton up and then control something on ground, typically the Robonaut system at Houston or just an upper body. So there are indeed, yeah, ideas to control the arms also via telepresence.

Q: When did development start on the wheeled platform, and what was the motivation?

Gerd Hirzinger: Maybe that was five years ago or something like that. I cannot say exactly. Five years, six years ago. I first wondered whether it would be good to build up a humanoid upper body with the arms, because originally they were not designed for that. And of course when we are talking about service assistance in the household I always say that the arms and the upper body have to be smaller finally, but people understand that these are development sequences and technology is going on. We found a lot of interest with our new developments, the so-called variable stiffness approaches with an integrated hand-arm system, which is no longer bigger than the human hand-arm system, which is no longer heavier than the human hand-arm system, which has approximately the same dynamics and approximately the same grasping forces. And it's a more complex system, because we are following the bionic approaches here, and we need two motors for each joint, similar to the human muscles, where we always have two actuators for one joint. But by this you can realize the effect that the arm reacts very quickly, and we made amazing experiments-- maybe you have seen the video where the lightweight robot hits a human head and yet it stops immediately, so it's not dangerous at all. Nevertheless, you need a few milliseconds until the sensor has registered that there is a resistance, stop and go back, so it needs a little bit of time. When we humans hit the table this is different. There is not first the sensing of some ache brought to the brain, and the brain says, "Withdraw the fingers"; our fingers are inherently compliant mechanically, and this is the principle of this new variable-stiffness control concept and the new arm and hand systems, which are smaller. And I think we will be able to build up an upper body which is not bigger than the human one and which is comparably strong and comparably dynamic.
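The two-motors-per-joint principle can be illustrated with a classic antagonistic model using nonlinear (here quadratic) tendon springs; with linear springs, co-contraction would not change stiffness at all. All constants below are hypothetical, not the DLR hand-arm design:

```python
def joint_torque_and_stiffness(theta, m1, m2, a=100.0, r=0.02):
    """Antagonistic pair with quadratic tendon springs (force = a * stretch^2).
    Tendon stretches for motor positions m1, m2 and joint angle theta:
        s1 = m1 - r*theta,  s2 = m2 + r*theta   (r = pulley radius).
    Net joint torque comes from the difference of the two tensions."""
    s1 = max(m1 - r * theta, 0.0)
    s2 = max(m2 + r * theta, 0.0)
    torque = r * (a * s1**2 - a * s2**2)
    # stiffness = -d(torque)/d(theta) = 2*a*r^2*(s1 + s2): raising both
    # motor positions together (co-contraction) stiffens the joint
    # without moving it, like tensing opposing muscles.
    stiffness = 2 * a * r**2 * (s1 + s2)
    return torque, stiffness

soft  = joint_torque_and_stiffness(0.0, 0.01, 0.01)
stiff = joint_torque_and_stiffness(0.0, 0.05, 0.05)
# equal motor offsets give zero net torque in both cases,
# but the co-contracted joint is several times stiffer
```

The mechanical compliance acts instantly, before any sensor or controller can react, which is exactly the impact-safety argument made in the answer above.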

Q: What do you see as the major applications for robotics in the next five or 10 years?

Gerd Hirzinger: That's difficult to say. I clearly see from our cooperations with car manufacturers that a new generation of robots like the lightweight robot-- it need not be the lightweight robot, but this type of robot, characterized by the soft-robotics experiments, with all its capabilities: the robot can make itself stiff, can make itself compliant, has these sensing capabilities, and if it is connected with a camera then it can look "Where do I have to assemble something?" and then do the assembly. And I see that there is a big field in assembly at the car companies, for example. So far the industrial robots are good in spot welding, in parts welding, in handling parts, but they have not been good in assembly, where they do delicate operations, where they have to adjust tolerances. The car manufacturers are feeding the parts very precisely because robots have no adaptivity; they are not really flexible. And the cost of feeding parts is higher than the robot itself, so this will end. In the next years I think robots will have more sensory capabilities, but on the other side, if we say okay, then just provide the classical industrial robots with a force sensor: from a present point of view I think this is not the solution, because with the classical robots the force signals have to go via the position interface, and the dynamics which you can reach, the capability and the performance, are not comparable with those of the human arm with its flexibility and compliance and adaptivity. But they can be imitated by the lightweight robot technologies, because these arms can make themselves very compliant. And when you see them assemble something they are trembling a little bit, and then all of a sudden the part is fed in. And there are car companies who have tested that, and it's amazing what kind of success they have in their tests in assembling gearings, where the robots are shaking a little bit and just the part is in.
Amazing. And this is even more than we have seen in the past. So I think production assistance will become a big story in the next years: also arms, as I said, which are not big, which are not so heavy, which support human workers on a mobile system, for example, while driving around, fetching parts or assembling something; much more flexible assembly than was possible up to now. And, well, there have been a lot of predictions for service robotics in houses. Companies like iRobot have shown that there is a big market, especially for vacuum cleaners. I also see a lot of robots moving around in gardens cutting the grass, and I think these areas will grow. There will be some time to go until robots are really helpful in households. Also, I say in a few years there will be a lot of people who have enough money to buy a Mercedes, but it doesn't make sense anymore to buy a Mercedes, and then they will buy a service robot: people who are lying in bed and who at least should be able to command the system to fetch a drink, or go to the refrigerator and take out a prepared meal and open it; not cooking in the ultimate sense, but just preparing a meal. These tasks are not so difficult. Nevertheless, you need safety. Of course you need reliability and some kind of adaptivity and intelligence, and we see that the progress is very slow. So I keep away from promising something. I think that within the next 10 years these systems should be available for simple tasks; not for replacing elderly care, not for replacing humans. I think humans will always be necessary and important to talk with people lying in bed, so there is no way to replace that with robots. But for the boring tasks overnight, to fetch something at any time of night or so, the system should be helpful. This is my prediction, but I see at least 10 years until the first really helpful systems will be available.
Maybe I'm too pessimistic at the moment, but in the past all the predictions have been too optimistic.
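The compliant-assembly behavior Hirzinger describes can be sketched with a simple impedance-control law. The code below is an illustrative one-dimensional toy, not DLR's actual controller, and all gains and numbers are hypothetical: it shows why low commanded stiffness keeps contact forces small when a part is misaligned, so it can slide into place instead of jamming.

```python
# Toy 1-D model (hypothetical numbers, not DLR's controller) of compliant
# behavior: a virtual spring-damper law driving a unit mass toward the
# goal position. Low stiffness K means a misalignment produces only a
# small force, which is what lets a compliant arm "find" the hole.

def impedance_force(k, d, x_desired, x, v):
    """Virtual spring-damper law: f = K*(x_d - x) - D*v."""
    return k * (x_desired - x) - d * v

def settle(k, d, x0, x_desired=0.0, dt=0.001, steps=5000, mass=1.0):
    """Simulate a unit mass under the impedance law (semi-implicit Euler).
    Returns the final position and the peak contact force seen."""
    x, v = x0, 0.0
    peak_force = 0.0
    for _ in range(steps):
        f = impedance_force(k, d, x_desired, x, v)
        peak_force = max(peak_force, abs(f))
        v += (f / mass) * dt
        x += v * dt
    return x, peak_force

# Same 10 mm misalignment: the stiff setting fights it with large forces,
# the compliant setting converges gently with far smaller forces.
x_stiff, f_stiff = settle(k=5000.0, d=200.0, x0=0.01)
x_soft,  f_soft  = settle(k=100.0,  d=20.0,  x0=0.01)
```

The point of the sketch is only the contrast between the two peak forces; a real lightweight-robot controller implements this idea per joint, with torque sensing in the drive train.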

Q: What do you see as the biggest technical hurdles or things that are holding back that kind of development within robotics?

Gerd Hirzinger: They are really manifold, or at least two-fold. We had the problem in the past that on the one side, of course, we tried to focus on mechatronic design, because we said "These arms are so stiff and difficult to control, and you will never have the elegant and compliant behavior of the things which we do." The classical robotics was always focused on being stiff, being very precise. Even if robots cannot guarantee some absolute precision, they could guarantee some repeatability of precision, and then they were calibrated by cameras. And I have always said it's not useful to calibrate such a system with cameras if the part it has to feed in is not in its place, like a car frame that is in a slightly different place. What does it help if the robot is calibrated? What is interesting is only the relative position, so we need sensors, we need sensory feedback. This is the topic. And therefore, in my opinion, the most important requirements have always been: provide robots with much more sensory capability, make them smaller, make them mechatronically nearly perfect. Solve this side of the problem. But this is not the only part. The other one is really vision. Look at the problems in vision. We still have a lot of problems if the lighting, the illumination, changes. If you are at a fair and the sun comes in, vision no longer works. Robust vision would be very important. Real-time 3D vision is important. Low-cost force sensing, maybe tactile sensing, collision avoidance and all these kinds of things, and generalization in combination with perception: a door handle should be recognized if it is similar to door handles that have been seen before, and so on. Mobile manipulation. So the perception and cognition part is equally important, but I always say you make an error if you just focus on one side. There have been years where we had the artificial intelligence community promise "We will solve every problem.
If there's a little bit of mechatronics, we can solve that later on; it's available more or less. We have to do logic and AI." And then the robot community, those who were working on the real robots, were frustrated and said "Well, the AI people are promising too much." I think both communities-- and I see them growing together now a little bit more than in the past. We need both areas, the mechatronic one and the software and intelligence and cognition one. Only if they both cooperate in an optimal way will we get the robots which we'll need in the future.

Q: Do you have a definition of robotics? Do you see it as a distinct field?

Gerd Hirzinger: Robotics, in my opinion, definitely is a subfield, maybe one of the most interesting and challenging subfields, of mechatronics. Only this closest integration of mechanics, optics, electronics and computer science-- computational power and information processing-- only this closest integration makes up the robots of the future. And therefore we tried, for example, to integrate all the electronics in the arm, so they have small cable lengths and so on. Also with the medical robot, everything is in the arm. People are so surprised when they see it, because they are used to the big systems, the ones with big boxes, and they say "Can it really be that everything is in the arm, like in the human system?" I think this is important, and this is the mechatronics, but this does not yet make the intelligence. It's only one part of the story, but neither side should forget the other part. They are equally important.

Q: For young people who are interested in a career in robotics what kind of advice do you have?

Gerd Hirzinger: I would advise-- and, by the way, to finish your earlier question, there are other interesting mechatronic systems. For example, I say the cars of the future are mechatronic systems; they try to avoid accidents. In fact, however, they are also becoming robots, mobile robots-- not manipulation robots, but they are also becoming a type of robot. So maybe the story is that all mechatronic systems tend to become kinds of intelligent mechanisms, be it classical robots, be it automotive cars, or be it airplanes and unmanned aerial vehicles. Everything is more or less robotic. But what was your last question?

Q: The advice for young people that are interested in a career in robotics.

Gerd Hirzinger: I often get the question "What is the best way to study mechatronics?" I say it's very difficult to cover this area in the same depth in all fields; maybe it's not possible. Of course mechanical engineering is an important basis. I recommend that they really go deeper in one field but try to get a good understanding of the other fields. I give an example from my lectures at the Technical University; they are in the area of computer science. Although I am not a born computer science guy, I tell them what I'm missing. Maybe it's not a specialty of Munich; other technical universities have the problem too. I say the education in computer science often does not really prepare students for mechatronic systems, because the informatics guys are good at logic and languages and network design and network behavior, but they are always accustomed to systems where you have to wait a little bit, be it the Internet or be it a computer network. You have some priorities, and things are connected to each other, and you get some response, and if the response is reasonably fast you think it's real-time. What we need in control, and especially in robotics, is dedicated sampling rates: every millisecond in our robot joints, every few microseconds, there must be a control algorithm which runs. It cannot wait until the system says "Well, unfortunately I'm a little bit busy now. I'll send something later on." And this is what many of the computer science people do not understand, and they often do not hear enough lectures about the description of dynamic systems. Some of them have never seen a real differential equation. Some of them have very little understanding of what mechanical inertia is, what it means to actuate a motor, how fast such a motor responds, how fast the sampling rate should be in order to control it perfectly. And that sampling rate should be absolutely reliable. These are the deficits some of the computer science people have.
And so it would be very good to have this basic understanding everywhere and then focus on one of the areas, but you need to understand the other fields; maybe do some practical work first or so. You need not be the ultimate expert, but you should have a deep understanding. That's my recommendation.
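The hard-real-time sampling Hirzinger contrasts with best-effort computing can be sketched as a fixed-period loop driven by absolute deadlines. This is an illustrative toy, not a real robot controller (which would run on a real-time OS with hardware timers); the 1 ms period is simply the figure he mentions, and the loop shows why deadlines are computed from a fixed start time rather than by sleeping relative delays, which would accumulate drift.

```python
# Illustrative fixed-rate loop (not a real-time controller): each cycle's
# deadline is an absolute time derived from the start, so timing errors
# in one cycle do not accumulate into the next.
import time

def run_control_loop(control_step, period_s=0.001, cycles=100):
    """Call control_step(k) once per period for `cycles` iterations."""
    start = time.perf_counter()
    for k in range(cycles):
        control_step(k)
        # Absolute deadline for the end of cycle k, measured from start.
        next_deadline = start + (k + 1) * period_s
        remaining = next_deadline - time.perf_counter()
        if remaining > 0:
            # A general-purpose OS only approximates this sleep; a robot
            # joint controller needs a guaranteed wakeup instead.
            time.sleep(remaining)
    return time.perf_counter() - start

ticks = []
elapsed = run_control_loop(lambda k: ticks.append(k), period_s=0.001, cycles=100)
```

100 cycles at 1 ms take about 0.1 s here regardless of per-cycle jitter; what the sketch cannot provide, and what Hirzinger's point is about, is a *guarantee* that no single cycle is late.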

Q: Is there anything else you'd like to add?

Gerd Hirzinger: Oh, I think we have-- yeah, I could mention just one comment I have. Part of the institute is in Berlin, the optical systems, and I see how closely they are related to our mechatronics work. For example, we are modeling the world in 3D by flying with airplanes and cameras, and these cameras are normally mounted on a stabilization platform which compensates the motion of the airplane, so you have inertial sensors, you have actuators, you have the cameras, and then you have to compute 3D models out of the camera images, so everything comes together. All of mechatronics-- mechanics, optics, electronics and computer science-- is together there. And even the experiment which I mentioned, which we did in Japan with the satellite control with the robot arm, comes back to us at the moment, because-- maybe you have seen it on the Internet-- there was a cooperation with DigitalGlobe modeling Mount Everest from space. And we did that with a small company here, 3D RealityMaps, and DigitalGlobe made a big press release about it. The pictures were taken by the WorldView-2 satellite. Typically, if one orders pictures for 3D modeling, the satellite has to be reoriented first to look there, then this way as it comes over, and then the other way. And this motion of the satellite is typically caused by control moment gyros, which rotate at a few thousand revolutions per minute, and they cause vibrations, and this is bad for the pictures. So what people would like to have is a mechanism which orients the satellite without a vibrating mechanism. And this is what you can do with a robot arm, and while we try to make it simpler, we are making such an experiment. We will use our robot actuators-- but only three of them, not a full robot-- actuators which rotate around zero velocity but just exert the torques, as in a robot joint. And we will take these robot-drive motors which I mentioned.
So all these things come together here, including the development of an artificial heart, where we also use the small lightweight-robot actuators. In an artificial heart, what you need is low weight and minimal power consumption, and a beating heart can be done by just, yeah, inverse motion or something like that. But you have the same challenges as in robotics, so mechatronics really gives you very general concepts.
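The vibration-free reorientation Hirzinger describes rests on conservation of angular momentum: turning an internal actuator one way rotates the spacecraft body the other way, and when the actuator stops, the body stops. The sketch below is back-of-envelope physics with hypothetical inertia values, not numbers from the DLR experiment.

```python
# Hypothetical numbers: a small internal actuator reorienting a rigid
# body. With zero total angular momentum, I_body*theta_body +
# I_wheel*theta_wheel = 0, so the body slew follows from the wheel angle.

def slew_angle(i_body, i_wheel, wheel_angle):
    """Body rotation (rad) produced by turning an internal wheel through
    wheel_angle (rad) about the same axis, starting from rest."""
    return -i_wheel * wheel_angle / i_body

# e.g. a 0.05 kg*m^2 actuator turned 100 rad on a 50 kg*m^2 body
# slews the body by -0.1 rad (about -5.7 degrees).
theta = slew_angle(50.0, 0.05, 100.0)
```

This is why an actuator that merely exerts torque around zero velocity, like a robot joint, can reorient the satellite without the sustained high-speed rotation of a control moment gyro that causes vibration.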

Q: Is the artificial heart project within DLR or a spin-off company?

Gerd Hirzinger: Yeah. Oh yeah, I forgot that. That's a remarkable recent spin-off company. There are already 30 people in that company, Dualis, and they have strong relations with US groups. They get their venture capital at the moment from the US. Yeah. They are joining US companies that are also active in the area of artificial hearts. That's-- oh yeah, good that you mentioned that.

Q: What's the name of the company?

Gerd Hirzinger: The company here is Dualis.

Q: Who's in charge?

Gerd Hirzinger: Dr. Thomas Schmid.

Q: What was it like working with Ernst Dickmanns?

Gerd Hirzinger: Well, when he was here I was too young at that time, and in DLR he was a specialist on optimal maneuvers and paths for rockets and reentry vehicles; yeah, orbital maneuvers and orbital path planning were his topics. I think only when he went to the Bundeswehr did he go into the field of image processing. But his approach was much inspired by control and dynamics. The dynamic vision which he created was based on the fact that there is a dynamic model and that you can estimate this model by Kalman filtering or so. You can estimate the motion. Nothing happens unexpectedly in nature and technology; you always have a dynamic motion in a car or a satellite. He did experiments with satellite control and so on. It's always based on dynamic models, and that's what I liked. It was deeply inspired by this institute, which at that time was called the Institute for Flight System Dynamics. That was the original name of our institute until '92, I think, or something like that. And, as I said, he was then very much interested in this robot experiment in space, and actually his son collaborated with us. His son was just finishing his studies then, and he had another doctoral student-- I forgot his name-- and they worked with us on that project of the free flyer. And then on the car-- at that time we did not work on autonomous cars; today we are working on robotic cars. By the way, yeah, I forgot to mention one thing. We have so far been talking only about arms and hands. Of course we are also very much interested in planetary rovers, and in the last years we have intensively been working on what was supposed to be the first European Mars rover project, the ExoMars mission. I'm a little bit sad that at the moment it looks like the mission will not be flying. In the last years there have been discussions with NASA to make a joint mission out of it, to fly a NASA rover and a European rover.
We have been working, as I said, on the European rover. We have provided the engineering model with our motors, but it looks at the moment like there will be only one rover, and probably or maybe it will be only a NASA rover, like the Mars Science Laboratory or something like that. That would be a great pity, and I would be very unhappy, though I know that the JPL people have some interest in collaborating with us-- maybe we would do the navigation and path planning and control, and they would develop the mobility and the rover itself. It's not decided yet. And on the other side, Germany is focusing a little bit on the moon, so we are studying mobility on the moon. And there's also the Google Lunar X Prize-- maybe you have heard of that. And we have joined one of the teams who want to fly privately to the moon. That would be big fun for us. But we are, yeah, intensively working on rover designs at the moment.
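The dynamic-vision principle Hirzinger credits to Dickmanns, estimating motion through a dynamic model rather than treating images frame by frame, can be sketched with a one-dimensional constant-velocity Kalman filter. This toy is not Dickmanns' actual 4-D approach; all parameters, noise levels, and the simulated measurements are illustrative.

```python
# Toy 1-D illustration (not Dickmanns' system) of dynamic vision: noisy
# position measurements of a moving feature are fused with a
# constant-velocity model, so the filter estimates the motion itself.
import random

def kalman_track(measurements, dt=0.1, q=1e-4, r=0.04):
    """Constant-velocity Kalman filter over position-only measurements.
    Returns the final (position, velocity) estimate."""
    x, v = measurements[0], 0.0          # state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]         # estimate covariance
    for z in measurements[1:]:
        # Predict with the dynamic model x <- x + v*dt; P <- F P F^T + Q.
        x += v * dt
        P = [[P[0][0] + dt*(P[1][0] + P[0][1]) + dt*dt*P[1][1] + q,
              P[0][1] + dt*P[1][1]],
             [P[1][0] + dt*P[1][1], P[1][1] + q]]
        # Update with the position measurement z (H = [1, 0]).
        s = P[0][0] + r
        k0, k1 = P[0][0] / s, P[1][0] / s
        y = z - x
        x += k0 * y
        v += k1 * y
        P = [[(1 - k0)*P[0][0], (1 - k0)*P[0][1]],
             [P[1][0] - k1*P[0][0], P[1][1] - k1*P[0][1]]]
    return x, v

# Simulated feature moving at 2.0 units/s, observed with noise (sd 0.2).
random.seed(0)
zs = [2.0 * k * 0.1 + random.gauss(0, 0.2) for k in range(200)]
pos, vel = kalman_track(zs)
```

The velocity estimate converges to the true motion even though no single measurement contains velocity at all, which is the "nothing happens unexpectedly" idea: the dynamic model carries the information between frames.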

Q: Which team is that?

Gerd Hirzinger: They're called Part-Time Scientists, PTS. But of course there are a lot of inadequacies or uncertain things: you need financiers to work with. For this kind of thing you need a lot of money.

Q: Anything else you'd like to add?

Gerd Hirzinger: No. No, I think we have spoken about everything really.
