IBM's Watson, a super-computer best known as a Jeopardy-winning robot, might not exist if not for Dr. David Ferrucci. When asked to lead the Watson project in 2007, Ferrucci—who leads the Semantic Analysis and Integration Department at IBM's T.J. Watson Research Center—took a collaborative approach, teaching his team to act like a start-up within IBM. He recently spoke with Inc.com's Christine Lagorio about fostering innovation within a mature company, leading diverse workgroups, and what exactly Watson doesn't know. (Hint: Don't ask him about the plot of Wuthering Heights.)
What kind of future applications are there for Watson that might be interesting for small businesses?
There are lots of opportunities in general for businesses that need to do a better job of leveraging information that's buried and captured in natural language content. How do we dig into documents and understand the evidence and how it supports our information needs in a more precise way, and ultimately, in a more natural way? There's an enormous amount of information out there that I'm not getting to, and we call that "recall blindness." In other words, I'm blind to all this other stuff that might not be tagged with the same keywords or show up on the first search page. Getting smarter with technology that analyzes and understands natural language content is the motivation, and I think there are huge business opportunities for small business, because when you think about the whole knowledge management problem, it all hinges on figuring out how to do that better.
You need this ecosystem of developers who can go in there and understand how best to apply Watson, how to build the right interfaces, how to customize and optimize the capability to solve the target company's information needs. I think it's going to open a door to having companies look at the potential. There are traditional databases on one side and keyword search on the other, but right in the middle, in this sweet spot where people are struggling to get greater precision and greater breadth in answering information needs, the engine has been so hard to replicate, and that's only part of it. If a small business can figure out how to bridge that gap, I think there's a huge opportunity there.
You've said Watson was a huge investment for IBM. Tell me about the process of developing Watson.
It's been about four years, and on average, about 20 to 25 people per year as an investment in the core technology. There were other parts of IBM that helped us do other things to finally deliver Watson to play the Jeopardy game. And of course, IBM brought other hardware resources to bear. But in addition to the size of the investment, it was an extraordinarily unique opportunity. We had to adapt the culture a bit to work on it.
How isolated were you? It's almost as if you had a small start-up inside of the larger company.
Typically, the people in research are working on ideas a little bit slowly, at the pace of traditional scientific research. But here we were under the gun to take the technology to the next level in a very rapid way. When I put together the team, the focus was on performance and a rapid innovation cycle. It also required a fairly sizable investment in hardware infrastructure. We had to use lots of hardware, thousands and thousands of computers dedicated to running our weekly error tests, and every two weeks we generated 20 gigabytes of error analysis data. Then we had Web-based tools that we'd use to slice and dice that data so we could understand what happened.
The other interesting thing about how we managed this project internally is that it was very much like an entrepreneurial environment. We were operating at a large scale and a small scale at the same time. We were left as this team: you go out there, you have this goal, you've gotta do it and do it within a particular time frame. Not only that, we had to report quarterly, so the senior VP was ready to say, "I'm sorry, you're going too slow, it's over." There was that aspect of the project: we were under a tremendous amount of pressure to do something nobody had done before, and to do it quickly.
Did you ever sit back and say, "I'm acting like a CEO here"?
For me, this is what we had to do. I never thought of myself so much as a CEO as a principal investigator; in other words, what ideas do we pursue, which ideas seem to be valuable, which ones don't? I would involve myself in the ideas, I'd iterate on them, I'd extend them myself, I'd think about how best to do this. I think of myself more as a researcher, but the magnitude of the problem, the time pressure, and the expectations put me in the position of driving innovation in a more rapid cycle, and that's when you start acting like a CEO, because you're managing investments with tremendous pressure around you.
What leadership lessons did you learn along the way?
I think one of the things I anticipated and validated, and therefore learned in this, was the importance of diversity of skill and talent. It was so critical for this particular project because we had computational linguists, software engineers, software architects, people steeped in classic AI knowledge representation and reasoning, statistical machine learning people, game theory people, natural language processing people, parsing people, information retrieval people. Lots of different perspectives on the problem were absolutely critical.
While there had to be strong leadership at the top to decide which ideas to pursue, you had to keep an open mind and allow all that input to come in. You needed all those different perspectives; without them, if we had sat there and said, "There's only one way to do this," we would've absolutely failed. I had the instinct that the solution would be a hybrid of many different technologies. I remember starting the project: we had lots of varied opinions, and a lot of people thought it was impossible and didn't want it to start. They wanted to kill it right at the beginning. They thought it was a waste of time and money. That's because they thought it was a quest to find a silver bullet or magic formula. It wasn't; there was instead a broad spectrum of innovations we realized and integrated to solve this problem.
How did the Jeopardy idea come about?
An executive was having dinner at a restaurant, and he was familiar with our research. Ken Jennings was on television; this was during his big winning streak, around 2004. The executive looked up and thought, "Could you get a computer to play Jeopardy? Gee, that'd capture people's imaginations."
How'd you end up with the project in your hands?
I took a few people and did a feasibility study, so I could pitch it to Paul Horn, a senior vice president at IBM. I did convince myself it was feasible. I didn't think we'd be winning against grand champions, but at the time, that wasn't the goal. I thought it could be done. It was an irresistible vision. To me, it was exactly what we needed in this field. We need a problem like this to really advance the science. I pitched it, and Paul Horn said, "Let's go ahead."
The way I saw it, we had no choice but to say yes to this challenge, and that convinced a few people, and we did take it on. Then Paul Horn left IBM and John Kelly took his place, and I had to re-pitch the whole thing. In the end, John raised the stakes. At the beginning of the meeting, he didn't want it, but by the end, he said, "Okay, I like this, we can do it, but if we do it we gotta do it to win." He talked about the brand and the impact it might have on the brand, so the pressure really got turned up quite a bit.
What was the benefit to individual members of your team after the first Jeopardy win?
The members of the team, I can't tell you how happy they were with this project, for so many reasons. First of all, it brought an overwhelming sense of accomplishment. And an opportunity to bring so much attention to themselves and their own work, knowing that when they write papers, people are now going to be so interested in reading them. This is part of why people go into research or industrial research: They want to be able to push the limits, but they want to push the limits in a way that makes a difference to business. That's why you go into business. That's it in a nutshell. They could've worked their entire careers and never been able to get that type of accomplishment. I think that made them feel really, really great. It made them feel they had a once-in-a-lifetime opportunity and they didn't squander it.
What about the process of getting there?
I think the other thing they're really happy with is the teaming that went on. We put everybody in one room. At the beginning of the project, the bandwidth of communication was much too slow. People would come into my office and I'd say, "How're you doing on that idea?" and they'd say, "I didn't get very far." "Why?" "I couldn't meet with so-and-so..." So we cut all that out. I said, "We're all going to get into one room, and when you have a problem, you stand up and speak." We did that, we all got into one room—which is not typically how researchers are used to working here—and the result was a closeness and a deep appreciation the team members had for themselves and each other, and for a working dynamic that was so much more efficient than it would've been otherwise.
Speaking of efficiency, are there any subjects in particular you've noticed that Watson is bad at?
Any question that requires you to break it down into parts, solve one part over here and one part over there, and then combine the evidence or combine the information. What else is hard is when a question requires global context. A question like, "In this book, how many times did the protagonist argue with their mate?" It's interesting because you have to read the whole book, and you have to understand what it means to have an argument, which can vary dramatically depending on how you define it, but then, moreover, you have to have this global context, because you have to be able to count how many times that happened. Even worse, even more difficult, is "This kind of event happened more than three times in this book." Lots of events may have happened more than three times, so why this one? Those are difficult because they require you to have a contextual understanding and a human perspective on the data.