No French language stuff in this post–it’s a response to a question that I get fairly often from people who are thinking about leaving graduate programs in natural language processing or speech technology.
I was recently asked: What’s it like to work in the speech and/or language technology industries with a master’s degree? To understand the question, you have to realize that the alternative would be to work in the speech and/or language technology industry with a PhD. People in these fields will typically have one graduate degree or another (or both)–the question relates to differences in how you will experience the industry world depending on which of the two you have. To understand the context of my answer, it’s probably helpful to know that I got a master’s degree, went into industry, went back into academia with a master’s degree and became a researcher, and then got a PhD. So, I have a little bit of familiarity with academia and with industry, as well as with the experience of working when you have a PhD versus working when you don’t.
The short answer: there isn’t necessarily any difference between what it’s like to work in industry with a master’s degree and what it’s like to work in industry with a PhD. If you can take a problem and come up with an answer by yourself, and the answer works; if you can identify useful new problems; and if you can propose, implement, and design the evaluation for some project yourself, then you’re going to be treated like a PhD, and if you negotiate reasonably well, you’re going to get paid like one. Now, the same is actually true in academia, the primary difference being that with a master’s degree, you’re very likely to be working in someone else’s lab and writing grants for someone else, versus having your own lab and writing grants for yourself. (I did this for quite a while, and I loved it.) This is also true of working in a private think tank like MITRE or BBN, except that you can write your own grants in places like that, at least for internal funding–external agencies like NSF and NIH are not very likely to fund you if you don’t have a PhD.
A longer answer: in industry, having a master’s degree versus having a PhD is likely to affect the position into which you are first hired, but it won’t necessarily have an impact on what your job is like. Companies that do speech and/or language processing work are accustomed to using the level of your graduate degree as an indicator of how likely you are to be able to do independent work when they’re considering hiring you, but once you’ve talked your way into a job, they generally care much more about the results that you do (or don’t) produce than they do about your academic pedigree. A friend of mine has an ex-husband who tried to get a PhD in computer science, failed his comps, and then went into the robotics industry–a good example of a field that’s very oriented towards writing code and towards engineering, but also very much an area where there is a need to be able to do real research in addition to development. He did quite well in industry, eventually starting his own company, which he sold for “major bank,” as the kids say (the expression means “a lot of money”). She described his experience like this:
His fear was that he’d never be able to be the principal investigator on a grant (still the case, I believe) and that he wouldn’t be taken seriously. I think that was the case initially, too…certainly in terms of starting position and compensation. He had to prove himself once in the door and build credibility that was conveyed more automatically with a PhD (though I’m sure a PhD could lose that credibility if they didn’t prove worthy). At this point, I don’t think he’s in any different place than he would have been with a PhD. He just had to have more to offer to get those initial doors open than he would have with a PhD.
A master’s degree won’t keep you from rising through the ranks of the industry world, and a PhD won’t keep you from getting fired–I’ve certainly seen both of these happen.
You’ll find that what you learnt in graduate school can be super-helpful in industry. The converse is also true–the time that I spent in industry was the best thing that ever happened to my academic career. But the opposite is true, too–some of the things that you learn in grad school can hurt you in industry. I’ll give you some examples of both.
Things you learn in graduate school that can be helpful in industry: the ability to define a problem, state it clearly, figure out how to evaluate it, and communicate what you’ve done is super-useful in industry (and in life in general!). In fact, I suspect that a lot of my success in industry (to the extent that I had it–objectively, I have been offered a promotion in every industry job I’ve ever had, which I guess is a sort of success) was related to the fact that if I had an idea, I could communicate it more clearly than pretty much everyone else–not because I’m smarter than anyone else, but because I have a degree in English (among other things), and in the process of getting that degree, I learnt to write (reasonably) clearly, and quickly. All other things being equal, a well-crafted email will generally trump a crappily written one.
Things you learn in graduate school that will hurt you in industry: this may be very culture-specific, but the picture that I got as a student in my linguistics graduate program was that you need to stake out a position and then defend it to the death, and you don’t get to change that position very often. Again, this may be specific to the culture of linguistics, or even just specific to the culture of linguistics at the time when I was in graduate school–certainly the community around Chomsky and the subfield of linguistics in which he specialized (syntax) is pretty notorious for brutal fights around theoretical issues. Be clear that this is an attitude that will hurt you in industry. Theory can be important in industry–often a company has a stake in some particular kind of approach to a problem, at least in very broad terms–but, theoretical purity per se is not typically valued in industry. In fact, probably the opposite is true–the path to success in industry lies more in humility about your ideas and a willingness to seriously consider the other person’s take on things than it does in defending whatever your take on a question happens to be. Industries that don’t do this get overthrown, companies that don’t do this fail, and engineers that don’t do this get fired.
From the perspective of quite a few years of doing natural language processing and computational linguistics in an environment that has a hell of a lot more physicians and biologists in it than it has computational people, I’m starting to wonder if this isn’t just a matter of cultural mores, but of differences in philosophy of science. The classic linguist’s philosophy of science is that of Thomas Kuhn, where the conception (if I understand it correctly, which is not a given) is that science advances when the old ideas collapse under the weight of their clearly stupid inadequacies, and the new ideas succeed by being brilliant, and right, and new–even sui generis. In contrast, you could say that the industrial world is underlyingly more influenced by the philosophy of science of Karl Popper, where the idea (again, if I understand it correctly, and again, that’s not a given) is that science proceeds only by falsifiability. On this model, you should be happy if your hypothesis is not supported, because now you really know something, and you can move forward. I’m not claiming that this happens universally in daily life in industry–you bet you will run into people who will get pissed if they’re questioned about the approach that they’re taking to something, or if the testers uncover a bug in their implementation, or whatever. But, underlyingly, in industry you want someone to find your problems–before the customer does. I may be overthinking the issue with respect to my philosophy of science explanation–Chris Brew, who knows both the academic and the industry computational linguistics and language processing worlds very well and has been a long-term mentor of mine, sees it like this:
I’m not totally clear on the “why” of the bad thing that linguistics graduate school teaches you. There is certainly a mismatch in culture, and humility and listening are rewarded in industrial settings. But I think the central factor is not philosophy of science but willingness to step into the other person’s shoes and awareness that their perfectly valid priorities may not be your priorities. Laser-like focus on a personal agenda also sucks in academia, but is more likely to be tolerated if you are somehow brilliant and high-status. “Not a team player” is the standard industry complaint about ex-academics.
Things you learn in industry that will help you in graduate school: see the entire preceding section. As far as I can tell, picking a position and sticking to it come hell or high water won’t actually help you do better science in academia any more than it will help you build good software in industry. A little humility about your ideas can go a really long way towards helping you understand whatever it is that your science is about understanding. There are really practical things that industry will help you with once you get back to graduate school, too (or even if you don’t). One of these is deadlines. Industry and academia are both pretty deadline-driven, and that was never apparent to me in graduate school, or at least not until it was too late. Some time in industry helped me understand the role of deadlines, and also helped me develop methods for making sure that I never missed them–methods that worked for me, at any rate. Another practical thing is the importance of all of those things that they teach you in software engineering classes–documenting your code, testing your code, defining requirements at the beginning of a project (or having a solid plan for how you’ll do it iteratively throughout the project, if you do something like the Agile method of software development), testing your code, taking usability issues very seriously, and testing your code. For me, thinking a lot about testing my code led to me thinking a lot about how similar the theoretical bases of software testing are to the theoretical bases of linguistics, and that ultimately led to a bunch of publications on the subject and to me doing my dissertation on approaching software testing as a problem in descriptive linguistics.
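The testing habit is easy to illustrate with a toy sketch. The `tokenize` function below and its test cases are my own invention, not anything from the post–the point is only that a test suite records both what your code handles and what it doesn’t, before a customer finds out for you:

```python
def tokenize(text):
    """A deliberately naive tokenizer: split on whitespace and
    strip surrounding sentence punctuation from each token."""
    return [tok.strip(".,!?") for tok in text.split()]

# The easy cases pass...
assert tokenize("The cat sat.") == ["The", "cat", "sat"]
assert tokenize("Hello, world!") == ["Hello", "world"]

# ...and the tests also document known behavior at the edges:
# internal apostrophes and hyphens are left alone, which may or
# may not be what a downstream component wants.
assert tokenize("O'Brien's co-worker") == ["O'Brien's", "co-worker"]
```

Writing the edge-case assertions down is the part that transfers back to academia: they force you to say, explicitly, what your analysis does with the data that doesn’t fit.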
Another important lesson to take back to academia from your time in industry is the importance of edge cases and the phenomena in the “long tail.” You probably know the joke about what happens if you ask a phonetician, a phonologist, and a syntactician if all odd numbers are prime–the phonologist says “one is an odd number and one is prime, 3 is an odd number and 3 is prime, 5 is an odd number and 5 is prime, 7 is an odd number and 7 is prime, 9 is an odd number and–9 is not prime, but if we say that it’s not prime, then we miss the generalization that a lot of odd numbers are prime, so let’s just assume that 9 is prime.” In linguistics, we tend to like generalizations, and generalizations by their nature tend to cover most of the data points in question, not just a few of the data points in question, or they wouldn’t be generalizations. The infrequent phenomena that don’t fit our analysis and don’t seem central, we tend to leave to the side, at least in non-empirical approaches to linguistics. This does not fly in the industry world, ever. You have to take care of every case, and that includes the special cases, and if you need some special code for them, some special ad hoc solution for them, then that’s just the way it is. I don’t care if your F-measure is 0.98 or your word error rate is not statistically significantly different from zero–show your product to a potential customer, and the first thing they’re going to do is to ask it to run on their name, or the name of their company, or their birthday, and if that doesn’t work, then you will be shown the door. I’ve seen this over and over–in industry, you have to account for everything.
It turns out that this is a good attitude to take back to academia with you. I’m reminded of an anecdote involving the phonologist Michael Broe. I once saw him give a talk that was focussed on an analysis of “regularly irregular” forms in some language or another–that is, things that are irregular, but that are irregular in a way that is similar to the way that some other things are irregular. Think about mouse/mice and louse/lice in English, or the very few French verbs in the ir class that take the same present indicative inflectional morphemes as er verbs, or what have you. Michael was asked why he was bothering to work on these regularly irregular forms when they’re so uncommon in the language that he was interested in. I’ve never forgotten his answer: It depends on what you think the goal of phonology is–is the goal of phonology to understand patterns in sound systems, or is the goal of phonology to understand frequent patterns in sound systems? Many people in my field have some story like this: a physician or a biologist asks you to build a system to do something or other. You build one, and you have an amazingly high F-measure, or an amazingly low word error rate, or whatever. You proudly demo your system for the physician or the biologist. On the entire computer screen, there are tons of correct outputs, and one fucking error. The physician or biologist points at the error, shrugs, and says, OK–let me know when it works. If you want your research to actually fix some problems in the world, and you want those solutions to be taken seriously by the people who might actually have a use for them, then you need to think about those edge cases, those exceptions, those rare events–even if taking care of them means sacrificing some purity of theory or some elegance of design. Those edge cases, “exceptions,” and rare events are perhaps the things that uncover the flaw in your theory, and you should be very happy to come across them.
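The arithmetic behind those demo stories is easy to make concrete. F-measure is the harmonic mean of precision and recall, and even a score close to 1.0 leaves errors on the screen. A minimal sketch–the error counts here are invented for illustration, not from any real system:

```python
def f_measure(true_pos, false_pos, false_neg):
    """Balanced F-measure (F1): the harmonic mean of precision and recall."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return 2 * precision * recall / (precision + recall)

# 980 correct outputs, 10 spurious ones, 10 missed ones: an
# F-measure of about 0.99 -- and still 20 errors on the screen
# for the physician or the biologist to point at.
print(round(f_measure(980, 10, 10), 2))
```

A perfect system would score 1.0; the gap between 0.99 and 1.0 is exactly where the customer’s name, company, and birthday live.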
This little essay started with a very specific question: What’s it like to work in the speech and/or language technology industries with a master’s degree? I want to generalize it a bit, and answer a more general question: What’s it like to work in industry with a master’s degree? The short answer to this more general question is the same as the short answer to the original, more specific question: which graduate degree you have doesn’t necessarily make any difference with respect to what it’s like to work in industry. However, the long answer is somewhat different. This is probably counter-intuitive to academics, but in industry in general, it can actually be easier to get a job with a master’s degree than with a PhD. Now, I’m talking here specifically about high-tech industries where you’re basically being hired to write computer programs of one sort or another, versus ones where you’re being hired to do things that are closer to the research and development end of the continuum of high-tech jobs. In these environments, having a master’s degree or not isn’t necessarily considered important, one way or the other–they want to know if you have the technical skills that they need and whether or not they think you’ll be non-painful to work with, and that’s about it. On the other hand, having a PhD is often not looked on kindly by private companies. Your potential future co-workers may be–often are, in my experience–quite suspicious of people with doctorates, suspecting that they might be strong on theory but weak on implementation, and–worse–rigid defenders of whatever their position happens to be, unwilling to seriously consider alternate approaches. I didn’t just make up the phenomena that I described several paragraphs above! Now, this is not true of the speech and/or language technology industries, where there’s a long tradition of the interesting, innovative, and successful work coming from PhDs.
But, in the broader industrial world, the skepticism-about-PhDs phenomenon is widespread.