More Tech Stars Skip College, at Least for a While

(p. B1) The college dropout-turned-entrepreneur is a staple of Silicon Valley mythology. Steve Jobs, Bill Gates and Mark Zuckerberg all left college.
In their day, those founders were very unusual. But a lot has changed since 2005, when Mr. Zuckerberg left Harvard. The new crop of dropouts has grown up with the Internet and smartphones. The tools to create new technology are more accessible. The cost to start a company has plunged, while the options for raising money have multiplied.
Moreover, the path isn’t as lonely.
. . .
Not long ago, dropping out of school to start a company was considered risky. For this generation, it is a badge of honor, evidence of ambition and focus. Very few dropouts become tycoons, but “failure” today often means going back to school or taking a six-figure job at a big tech company.
. . .
(p. B5) There are no hard numbers on the dropout trend, but applicants for the Thiel Fellowship tripled in the most recent year; the fellowship won’t disclose numbers.
. . .
It has tapped 82 fellows in the past five years.
“I don’t think college is always bad, but our society seems to think college is always good, for everyone, at any cost–and that is what we have to question,” says Mr. Thiel, a co-founder of PayPal and an early investor in Facebook.
Of the 43 fellows in the initial classes of 2011 and 2012, 26 didn’t return to school and continued to work on startups or independent projects. Five went to work for large tech firms, including a few through acquisitions. The remaining 12 went back to school.
Mr. Thiel says companies started by the fellows have raised $73 million, a record that he says has attracted additional applicants. He says fellows “learned far more than they would have in college.”

For the full story, see:
DAISUKE WAKABAYASHI. “College Dropouts Thrive in Tech.” The Wall Street Journal (Thurs., June 4, 2015): B1 & B5.
(Note: ellipses added. The phrase “the fellowship won’t disclose numbers” was in the online, but not the print, version of the article.)
(Note: the online version of the article has the date June 3, 2015, and has the title “College Dropouts Thrive in Tech.”)

Cultural and Institutional Differences Between Europe and U.S. Keep Europe from Having a Silicon Valley

(p. B7) “They all want a Silicon Valley,” Jacob Kirkegaard, a Danish economist and senior fellow at the Peterson Institute for International Economics, told me this week. “But none of them can match the scale and focus on the new and truly innovative technologies you have in the United States. Europe and the rest of the world are playing catch-up, to the great frustration of policy makers there.”
Petra Moser, assistant professor of economics at Stanford and its Europe Center, who was born in Germany, agreed that “Europeans are worried.”
“They’re trying to recreate Silicon Valley in places like Munich, so far with little success,” she said. “The institutional and cultural differences are still too great.”
. . .
There is . . . little or no stigma in Silicon Valley to being fired; Steve Jobs himself was forced out of Apple. “American companies allow their employees to leave and try something else,” Professor Moser said. “Then, if it works, great, the mother company acquires the start-up. If it doesn’t, they hire them back. It’s a great system. It allows people to experiment and try things. In Germany, you can’t do that. People would hold it against you. They’d see it as disloyal. It’s a very different ethic.”
Europeans are also much less receptive to the kind of truly disruptive innovation represented by a Google or a Facebook, Mr. Kirkegaard said.
He cited the example of Uber, the ride-hailing service that despite its German-sounding name is a thoroughly American upstart. Uber has been greeted in Europe like the arrival of a virus, and its reception says a lot about the power of incumbent taxi operators.
“But it goes deeper than that,” Mr. Kirkegaard said. “New Yorkers don’t get all nostalgic about yellow cabs. In London, the black cab is seen as something that makes London what it is. People like it that way. Americans tend to act in a more rational and less emotional way about the goods and services they consume, because it’s not tied up with their national and regional identities.”
. . .
With its emphasis on early testing and sorting, the educational system in Europe tends to be very rigid. “If you don’t do well at age 18, you’re out,” Professor Moser said. “That cuts out a lot of people who could do better but never get the chance. The person who does best at a test of rote memorization at age 17 may not be innovative at 23.” She added that many of Europe’s most enterprising students go to the United States to study and end up staying.
She is currently doing research into creativity. “The American education system is much more forgiving,” Professor Moser said. “Students can catch up and go on to excel.”
Even the vaunted European child-rearing, she believes, is too prescriptive. While she concedes there is as yet no hard scientific evidence to support her thesis, “European children may be better behaved, but American children may end up being more free to explore new things.”

For the full story, see:
JAMES B. STEWART. “Common Sense; A Fearless Culture Fuels Tech.” The New York Times (Fri., JUNE 19, 2015): B1 & B7.
(Note: ellipses added.)
(Note: the online version of the story has the date JUNE 18, 2015, and has the title “Common Sense; A Fearless Culture Fuels U.S. Tech Giants.”)

We Often “See” What We Expect to See

(p. 9) The Justice Department recently analyzed eight years of shootings by Philadelphia police officers. Its report contained two sobering statistics: Fifteen percent of those shot were unarmed; and in half of these cases, an officer reportedly misidentified a “nonthreatening object (e.g., a cellphone) or movement (e.g., tugging at the waistband)” as a weapon.
Many factors presumably contribute to such shootings, ranging from carelessness to unconscious bias to explicit racism, all of which have received considerable attention of late, and deservedly so.
But there is a lesser-known psychological phenomenon that might also explain some of these shootings. It’s called “affective realism”: the tendency of your feelings to influence what you see — not what you think you see, but the actual content of your perceptual experience.
. . .
The brain is a predictive organ. A majority of your brain activity consists of predictions about the world — thousands of them at a time — based on your past experience. These predictions are not deliberate prognostications like “the Red Sox will win the World Series,” but unconscious anticipations of every sight, sound and other sensation you might encounter in every instant. These neural “guesses” largely shape what you see, hear and otherwise perceive.
. . .
. . . , our lab at Northeastern University has conducted experiments to document affective realism. For example, in one study we showed an affectively neutral face to our test subjects, and using special equipment, we secretly accompanied it with a smiling or scowling face that the subjects could not consciously see. (The technique is called “continuous flash suppression.”) We found that the unseen faces influenced the subjects’ bodily activity (e.g., how fast their hearts beat) and their feelings. These in turn influenced their perceptions: In the presence of an unseen scowling face, our subjects felt unpleasant and perceived the neutral face as less likable, less trustworthy, less competent, less attractive and more likely to commit a crime than when we paired it with an unseen smiling face.
These weren’t just impressions; they were actual visual changes. The test subjects saw the neutral faces as having a more furrowed brow, a more surly mouth and so on. (Some of these findings were published in Emotion in 2012.)
. . .
. . . the brain is wired for prediction, and you predict most of the sights, sounds and other sensations in your life. You are, in large measure, the architect of your own experience.

For the full commentary, see:
Feldman Barrett, Lisa, and Jolie Wormwood. “When a Gun Is Not a Gun.” The New York Times, SundayReview Section (Sun., April 19, 2015): 9.
(Note: italics in original; ellipses added.)
(Note: the date of the online version of the commentary is APRIL 17, 2015.)

The academic article mentioned in the passage quoted above is:
Anderson, Eric, Erika Siegel, Dominique White, and Lisa Feldman Barrett. “Out of Sight but Not out of Mind: Unseen Affective Faces Influence Evaluations and Social Impressions.” Emotion 12, no. 6 (Dec. 2012): 1210-21.

Steven Johnson Is Advocate of Collaboration in Innovation

(p. A13) Theories of innovation and entrepreneurship have always yo-yoed between two basic ideas. First, that it’s all about the single brilliant individual and his eureka moment that changes the world. Second, that it’s about networks, collaboration and context. The truth, as in all such philosophical dogfights, is somewhere in between. But that does not stop the bickering. This controversy blew up in a political context during the 2012 presidential election, when President Obama used an ill-chosen set of words (“you didn’t build that”) to suggest that government and society had a role in creating the setting for entrepreneurs to flourish, and Republicans berated him for denigrating the rugged individualists of American enterprise.
Through a series of elegant books about the history of technological innovation, Steven Johnson has become one of the most persuasive advocates for the role of collaboration in innovation. His latest, “How We Got to Now,” accompanies a PBS series on what he calls the “six innovations that made the modern world.” The six are detailed in chapters titled “Glass,” “Cold,” “Sound,” “Clean,” “Time” and “Light.” Mr. Johnson’s method is to start with a single innovation and then hopscotch through history to illuminate its vast and often unintended consequences.

For the full review, see:
PHILIP DELVES BROUGHTON. “BOOKSHELF; Unintended Consequences; Gutenberg’s printing press sparked a revolution in lens-making, which led to eyeglasses, microscopes and, yes, the selfie.” The Wall Street Journal (Tues., Sept. 30, 2014): A13.
(Note: ellipses added.)
(Note: the online version of the review has the date Sept. 29, 2014, and has the title “BOOKSHELF; Book Review: ‘How We Got to Now’ by Steven Johnson; Gutenberg’s printing press sparked a revolution in lens-making, which led to eyeglasses, microscopes and, yes, the selfie.” )

The book under review is:
Johnson, Steven. How We Got to Now: Six Innovations That Made the Modern World. New York: Riverhead Books, 2014.

Plant Breeders Use Old Sloppy “Natural” Process to Avoid Regulatory Stasis

(p. A11) What’s in a name?
A lot, if the name is genetically modified organism, or G.M.O., which many people are dead set against. But what if scientists used the precise techniques of today’s molecular biology to give back to plants genes that had long ago been bred out of them? And what if that process were called “rewilding”?
That is the idea being floated by a group at the University of Copenhagen, which is proposing the name for the process that would result if scientists took a gene or two from an ancient plant variety and melded it with more modern species to promote greater resistance to drought, for example.
“I consider this something worth discussing,” said Michael B. Palmgren, a plant biologist at the Danish university who headed a group, including scientists, ethicists and lawyers, that is funded by the university and the Danish National Research Foundation.
They pondered the problem of fragile plants in organic farming, came up with the rewilding idea, and published their proposal Thursday in the journal Trends in Plant Science.
. . .
The idea of restoring long-lost genes to plants is not new, said Julian I. Schroeder, a plant researcher at the University of California, Davis. But, wary of the taint of genetic engineering, scientists have used traditional breeding methods to cross modern plants with ancient ones until they have the gene they want in a crop plant that needs it. The tedious process inevitably drags other genes along with the one that is targeted. But the older process is “natural,” Dr. Schroeder said.
. . .
Researchers have previously crossbred wheat plants with traits found in ancient varieties, noted Maarten Van Ginkel, who headed such a program in Mexico at the International Maize and Wheat Improvement Center.
“We selected for disease resistance, drought tolerance,” he said. “This method works but it has drawbacks. You prefer to move only the genes you want.”
When Dr. Van Ginkel crossbred for traits, he did not look for the specific genes conferring those traits. But with the flood-resistant rice plants, researchers knew exactly which gene they wanted. Nonetheless, they crossbred and did not use precision breeding to alter the plants.
Asked why not, Dr. Schroeder had a simple answer — a complex maze of regulations governing genetically engineered crops. With crossbreeding, he said, “the first varieties hit the fields in a couple of years.”
And if the researchers had used precision breeding to get the gene into the rice?
“They would still be stuck in the regulatory process,” Dr. Schroeder said.

For the full story, see:
GINA KOLATA. “A Proposal to Modify Plants Gives G.M.O. Debate New Life.” The New York Times (Fri., MAY 29, 2015): A11.
(Note: ellipses added.)
(Note: the online version of the story has the date MAY 28, 2015.)

Tesla Cars Are Built on Government Subsidies

(p. A13) Nowhere in Mr. Vance’s book, . . . , does the figure $7,500 appear–the direct taxpayer rebate to each U.S. buyer of Mr. Musk’s car. You wouldn’t know that 10% of all Model S cars have been sold in Norway–though Tesla’s own 10-K lists the possible loss of generous Norwegian tax benefits as a substantial risk to the company.
Barely developed in passing is that Tesla likely might not exist without a former State Department official whom Mr. Musk hired to explore “what types of tax credits and rebates Tesla might be able to drum up around its electric vehicles,” which eventually would include a $465 million government-backed loan.
And how Tesla came by its ex-Toyota factory in California “for free,” via a “string of fortunate turns” that allowed Tesla to float its IPO a few weeks later, is just a thing that happens in Mr. Vance’s book, not the full-bore political intrigue it actually was.
The fact is, Mr. Musk has yet to show that Tesla’s stock market value (currently $32 billion) is anything but a modest fraction of the discounted value of its expected future subsidies. In 2017, he plans to introduce his Model 3, a $35,000 car for the middle class. He expects to sell hundreds of thousands a year. Somehow we doubt he intends to make it easy for politicians to whip away the $7,500 tax credit just when somebody besides the rich can benefit from it–in which case the annual gift from taxpayers will quickly mount to several billion dollars each year.
Mother Jones, in a long piece about what Mr. Musk owes the taxpayer, suggested the wunderkind could be a “bit more grateful, a bit more humble.” Unmentioned was the shaky underpinning of this largess. Even today’s politicized climate modeling allows the possibility that climate sensitivity to carbon dioxide is far less than would justify incurring major expense to change the energy infrastructure of the world (and you certainly wouldn’t begin with luxury cars). Were this understanding to become widespread, the subliminal hum of government favoritism could overnight become Tesla’s biggest liability.

For the full commentary, see:
HOLMAN W. JENKINS, JR. “BUSINESS WORLD; The Savior Elon Musk; Tesla’s impresario is right about one thing: Humanity’s preservation is a legitimate government interest.” The Wall Street Journal (Sat., May 30, 2015): A13.
(Note: ellipsis added.)
(Note: the online version of the commentary has the date May 29, 2015.)

The book discussed in the commentary is:
Vance, Ashlee. Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future. New York: Ecco, 2015.

The Mother Jones article discussing government subsidies for Musk’s Tesla is:
Harkinson, Josh. “Free Ride.” Mother Jones 38, no. 5 (Sept./Oct. 2013): 20-25.

Little Progress Toward Complex Autonomous Robots

(p. A8) [In June 2015] . . . , the Defense Advanced Research Projects Agency, a Pentagon research arm, . . . [held] the final competition in its Robotics Challenge in Pomona, Calif. With $2 million in prize money for the robot that performs best in a series of rescue-oriented tasks in under an hour, the event . . . offer[ed] what engineers refer to as the “ground truth” — a reality check on the state of the art in the field of mobile robotics.

A preview of their work suggests that nobody needs to worry about a Terminator creating havoc anytime soon. Given a year and a half to improve their machines, the roboticists, who shared details about their work in interviews before the contest in June, appear to have made limited progress.
. . .
“The extraordinary thing that has happened in the last five years is that we have seemed to make extraordinary progress in machine perception,” said Gill Pratt, the Darpa program manager in charge of the Robotics Challenge.
Pattern recognition hardware and software has made it possible for computers to make dramatic progress in computer vision and speech understanding. In contrast, Dr. Pratt said, little headway has been made in “cognition,” the higher-level humanlike processes required for robot planning and true autonomy. As a result, both in the Darpa contest and in the field of robotics more broadly, there has been a re-emphasis on the idea of human-machine partnerships.
“It is extremely important to remember that the Darpa Robotics Challenge is about a team of humans and machines working together,” he said. “Without the person, these machines could hardly do anything at all.”
In fact, the steep challenge in making progress toward mobile robots that can mimic human capabilities is causing robotics researchers worldwide to rethink their goals. Now, instead of trying to build completely autonomous robots, many researchers have begun to think instead of creating ensembles of humans and robots, an approach they describe as co-robots or “cloud robotics.”
Ken Goldberg, a University of California, Berkeley, roboticist, has called on the computing world to drop its obsession with singularity, the much-ballyhooed time when computers are predicted to surpass their human designers. Rather, he has proposed a concept he calls “multiplicity,” with diverse groups of humans and machines solving problems through collaboration.
For decades, artificial-intelligence researchers have noted that the simplest tasks for humans, such as reaching into a pocket to retrieve a quarter, are the most challenging for machines.
“The intuitive idea is that the more money you spend on a robot, the more autonomy you will be able to design into it,” said Rodney Brooks, an M.I.T. roboticist and co-founder of two early companies, iRobot and Rethink Robotics. “The fact is actually the opposite is true: The cheaper the robot, the more autonomy it has.”
For example, iRobot’s Roomba robot is autonomous, but the vacuuming task it performs by wandering around rooms is extremely simple. By contrast, the company’s Packbot is more expensive, designed for defusing bombs, and must be teleoperated or controlled wirelessly by people.

For the full story, see:
JOHN MARKOFF. “A Reality Check for A.I.” The New York Times (Tues., MAY 26, 2015): D2.
(Note: ellipses, and bracketed expressions, added. I corrected a misspelling of “extraordinary.”)
(Note: the date of the online version of the story is MAY 25, 2015, and has the title “Relax, the Terminator Is Far Away.”)

George Bailey Wanted to Make Money, But He Wanted to Do More than Just Make Money

(p. 219) Actually, it’s not so strange. The norm for bankers was never just moneymaking, any more than it was for doctors or lawyers. Bankers made a livelihood, often quite a good one, by serving their clients– the depositors and borrowers– and the communities in which they worked. But traditionally, the aim of banking– even if sometimes honored only in the breach– was service, not just moneymaking.
In the movie It’s a Wonderful Life, James Stewart plays George Bailey, a small-town banker faced with a run on the bank– a liquidity crisis. When the townspeople rush into the bank to withdraw their money, Bailey tells them, “You’re thinking of this place all wrong. As if I had the money back in a safe. The money’s not here.” He goes on. “Your money’s in Joe’s house. Right next to yours. And in the Kennedy house, and Mrs. Backlin’s house, and a hundred others. Why, you’re lending them the money to build, and they’re going to pay you back, as best they can…. What are you going to do, foreclose on them?”
No, says George Bailey, “we’ve got to stick together. We’ve got to have faith in one another.” Fail to stick together, and the community will be ruined. Bailey took all the money he could get his hands on and gave it to his depositors to help see them through the crisis. Of course, George Bailey was interested in making money, but money was not the only point of what Bailey did.
Relying on a Hollywood script to provide evidence of good bankers is at some level absurd, but it does indicate something valuable about society’s expectations regarding the role of bankers. The norm for a “good banker” throughout most of the twentieth century was in fact someone who was trustworthy and who served the community, who was responsible to clients, and who took an interest in them.

Source:
Schwartz, Barry, and Kenneth Sharpe. Practical Wisdom: The Right Way to Do the Right Thing. New York: Riverhead Books, 2010.
(Note: italics in original.)

Computers Lack Intuition about How to Handle Novel Situations

(p. A29) It seems obvious: The best way to get rid of human error is to get rid of humans.
But that assumption, however fashionable, is itself erroneous. Our desire to liberate ourselves from ourselves is founded on a fallacy. We exaggerate the abilities of computers even as we give our own talents short shrift.
. . .
Human skill has no such constraints. Think of how Capt. Chesley B. Sullenberger III landed that Airbus A320 in the Hudson River after it hit a flock of geese and its engines lost power. Born of deep experience in the real world, such intuition lies beyond calculation. If computers had the ability to be amazed, they’d be amazed by us.
. . .
Computers break down. They have bugs. They get hacked. And when let loose in the world, they face situations that their programmers didn’t prepare them for. They work perfectly until they don’t.
. . .
We should view computers as our partners, with complementary abilities, not as our replacements.

For the full commentary, see:
NICHOLAS CARR. “Why Robots Will Always Need Us.” The New York Times (Weds., MAY 20, 2015): A29.
(Note: ellipses added.)

Conflict-of-Interest Politics Reduces Medical Collaboration with Industry and Slows Down Cures

(p. A15) The reality of modern medicine, Dr. Stossel argues, is that private industry is the engine of innovation, with productivity and new advances dependent on relationships between commercial interests and academic and research medicine. Companies, not universities or research with federal funding, run 85% of the medical-products pipeline. “We all inevitably have conflicts all the time. You only stop having conflicts when you’re dead. The only conflict-free situation is the grave,” he says.
The pursuit of the illusion “to be pure, to be priestly, to be supposedly uncorrupted by the profit motive,” Dr. Stossel says, often has the effect of banishing or else discounting the expertise of the people who know the most but whose integrity and objectivity are allegedly compromised by industry ties. What ought to matter more, he adds, is simply “Results. Competence. LeBron James–it’s putting the ball in the basket.”
. . .
Zero-tolerance conflict-of-interest editorial policies, Dr. Stossel says, suppress and distort debate by withholding positions of authority. “If you have an industry connection, if you really understand the topic, you can’t say anything,” he notes. “If you’re an editor, and you have an ideological predilection, you have all this power and you can say anything you want.”
Dr. Stossel is equally scorching about the drug and device companies and their trade organizations, which he says drift around like Rodney Dangerfield, complaining they don’t get no respect. They prefer not to be confrontational; they rarely fight back against the conflict-of-interest scolds. “They’re laying responsibility by default to the patients, the people who actually have a first-hand connection to whatever the disease is: ‘Goddammit, I want a cure.’ ”
Which is the larger point: The to-and-fro between publications not meant for lay readers can seem arcane, but the product of conflict-of-interest politics is fewer cures and new therapies. The predisposition against selling out to industry is pervasive, while reputations can be ruined overnight when researchers find themselves in a page-one exposé or hauled before Congress, even if there is no evidence of misconduct or bias.
Better, then, to conform in the cloisters than risk offending the conflict-of-interest orthodoxy–or translating some basic-research insight into a new treatment for patients. Dr. Rosenbaum reports: “The result is a stifling of honest discourse and potential discouragement of productive collaborations. . . . More strikingly, some of the young, talented physician-investigators I spoke with expressed worry about how any industry relationship would affect their careers.”
. . .
“Pharmaphobia”–part polemic, part analytic investigation, a history of medicine and a memoir–deserves a wide readership. . . . “I’d rather get a conversation started with people who are smarter than I am about how complicated and granular and nuanced and unpredictable discovery is. Let’s not slow it down.”

For the full interview, see:
JOSEPH RAGO. “The Weekend Interview with Tom Stossel; A Cure for ‘Conflict of Interest’ Mania; A crusading physician says medical progress is hampered by a holier-than-thou ‘moralistic bullying.’.” The Wall Street Journal (Sat., June 27, 2015): A15.
(Note: ellipses added.)
(Note: the online version of the interview has the date June 26, 2015, and has the title “A Cure for ‘Conflict of Interest’ Mania; A crusading physician says medical progress is hampered by a holier-than-thou ‘moralistic bullying.’.”)

The book mentioned in the interview is:
Stossel, Thomas P. Pharmaphobia: How the Conflict of Interest Myth Undermines American Medical Innovation. Lanham, MD: Rowman & Littlefield Publishers, 2015.

Intel Entrepreneur Gordon Moore Was “Introverted”

(p. A11) “In the world of the silicon microchip,” [Thackray, Brock and Jones] write, “Moore was a master strategist and risk taker. Even so, he was not especially a self-starter.” Mr. Moore possesses many of the stereotypical character traits of an introverted Ph.D. chemist: working for hours on his own, avoiding small talk and favoring laconic statements. Indeed, as a manager he often avoided conflict, even when a colleague’s errors persisted in plain sight.
. . .
After two leadership changes at Fairchild in 1967 and 1968, which unsettled its talented employees, Mr. Moore departed to help found a new firm, Intel, with a fellow Fairchild engineer, the charming and brilliant Robert Noyce (another of the “traitorous eight”). They also brought along a younger colleague, the confrontational and hyper-energetic Andy Grove. Each one of the famous triumvirate would serve as CEO at some point over the next three decades.

For the full review, see:
SHANE GREENSTEIN. “BOOKSHELF; Silicon Valley’s Lawmaker; What became Moore’s law first emerged in a 1965 article modestly titled ‘Cramming More Components Onto Integrated Circuits’.” The Wall Street Journal (Tues., May 26, 2015): A11.
(Note: ellipsis, and bracketed names, added.)
(Note: the online version of the review has the date May 25, 2015.)

The book under review is:
Thackray, Arnold, David C. Brock, and Rachel Jones. Moore’s Law: The Life of Gordon Moore, Silicon Valley’s Quiet Revolutionary. New York: Basic Books, 2015.