Steve Wozniak – the ethical geek


In a wide-ranging conversation with Andrew Masterson, the Woz explores the cultural implications of the technology he, more than anyone, helped create.


Steve Wozniak believes the erosion of liberty through data surveillance is more likely than a robot coup. – Getty Images

“To me, the human should be more important than the technology,” says Steve Wozniak.

“The way we think, use our brains, interact with each other, get around with our lives – these should be more important than the technology. The technology should adapt to our ways, our customs, and our concepts of what is right and wrong.”

So, technology needs to fit in or push off – given the speaker, it’s a fascinating statement of principle.

Any dinner party discussion about who has most influenced life in the modern era is bound to include Steve Jobs, Bill Gates, Mark Zuckerberg, Jeff Bezos and Elon Musk.

But if that dinner party includes guests drawn from the computer programming and electronic engineering fraternities, the answer will be different, and definite. The Woz, they will say, no question.

Wozniak, 66 this year, is arguably the least known yet most important of the young counter-cultural, convention-busting rebel idealists who in the seventies set in motion the technological innovations that led – remarkably rapidly – to home computers, laptops, the internet, smart devices, social media, the sharing economy and, may the world forgive them, Bluetooth-enabled pregnancy tests.

In 1976, in Steve Jobs’ California bedroom, Wozniak single-handedly built Apple’s first computer, largely out of timber.

It completely reconfigured existing computer designs, which were primarily either cumbersome mainframes or Atari arcade game consoles.

The Apple I, the first Apple computer, made by Steve Wozniak in 1976, here on display at Sotheby's in 2012. – Andrew Burton/Getty Images

Setting up a dynamic that was to eventually catalyse one of the largest corporations the world has ever known, what Wozniak built, Jobs marketed.

Selling his seriously hippy VW Kombi to fund the manufacture, Jobs flogged 50 units to a start-up computer store in Mountain View.

Apple was in business.

A year later, with operations now shifted into Jobs’ garage, the company launched the Apple II – the first home computer to combine printed circuits, keyboard, screen and information storage in one unit. It remained in production until 1993.

By then, though, Woz woz gone. In 1985, he decided he was seriously jack of corporate life, sold most of his stock, and walked away from a fortune.

Since then he has continued to innovate.

Steve Jobs and Steve Wozniak, co-founders of Apple Computer Inc, at the first West Coast Computer Faire in April 1977. – Photo by Tom Munnecke/Getty Images

In 1987, for instance, he invented arguably the most useful gadget of that decade – the programmable universal remote control. The Woz’s importance, however, stems from rather more than his oft-proven ability to wire up a motherboard.

Perhaps unusually for a computer geek, he is possessed of a keen understanding of the many ethical and cultural implications of the ubiquitous technology he helped create.

In 1990, he helped found the Electronic Frontier Foundation, now an international organisation that provides legal support for tech whistleblowers and runs political campaigns in defence of net neutrality and online civil liberties.

The internet is not inherently a force for positive change, and Woz recognises that behind every story of life-enhancing breakthrough there lurks the possibility of malevolence.

“I do feel badly that we first lost out to technology more than 200 years ago,” he says. “We will fire a human being, but we won’t fire a machine that makes us cheap clothing.

“By the way, those machines didn’t make our jobs go away, necessarily – they just switched what kind of jobs they were.

“The industrial revolution and the scientific revolution – if you want to, you can look at them very dystopianly, as things that pulled us out of our humanity. But we don’t really think that way. We feel very happy with the technology we have today.”

And therein, perhaps, lies the problem.

The astounding reach of the big tech companies, allied to a certain Jetsons-like naïve enthusiasm on the part of consumers, means that the narratives of the digital age are dominated by the marketers, and generally unchallenged by critics.

Small changes to new model smartphones receive disproportionate attention; faults and flaws are glossed over with an exceptionalism that would be untenable in any other field; purchasers are propelled into accepting an ever-narrowing choice of brand-compatible peripherals.

“Computers and the internet have allowed us to manage things of immense size that we never could manage before, and large companies are able to hold a market and not let young innovators and new products have an entry,” says Wozniak.

“They base that largely on a lot of communication and control, understanding their users. I think that will continue to happen independently. It’s sort of a shame, because it is a sort of dystopian vision, compared to today.”

There is another anxiety-inducing vision, of course, that lurks just beneath the shiny veneer of technological promise – the Asimovian idea of humanity being subjugated and controlled by intelligent machines.

Ironically, the possibility of machines one day staging a coup d’etat is far less likely than the gradual erosion of individual liberty through data surveillance and consumer restrictions via copyright enforcement and digital rights management, but it is undeniably easier to visualise.

The rise of the machines is a trope of science fiction that has lasted from Dr Who to Chappie and shows no sign of going away any time soon.

To Wozniak, such folk tales are a bit of a distraction. He is a big fan of machine learning – the love child of computer science and statistics that replaces “dumb” robots with AI-based programs that are able to learn, adapt and change according to input and circumstance.

The rise of the machines is a longstanding trope and usually, as with the movie 'Chappie', a negative one. – Media Rights Capital

Search engines are a neat example of machine learning. They adapt their searching behaviours to individual users, returning results that are (at least theoretically) ever more in sync with user preferences as they repeat the process more and more often. If the function is enabled, they also learn to predict what individual users might be interested in, based on previous results.
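That feedback loop can be made concrete in a few lines of code. The sketch below is a toy invented for this article – the topics, click counts and ranking rule bear no relation to how any real search engine works – but it shows the basic mechanism: record what a user clicks, then re-rank future results accordingly.

```python
from collections import defaultdict

class ToyRanker:
    """Toy illustration of preference learning: results on topics the
    user has clicked more often are ranked higher next time."""

    def __init__(self):
        # How often the user has clicked results on each topic.
        self.clicks = defaultdict(int)

    def record_click(self, topic):
        self.clicks[topic] += 1

    def rank(self, results):
        # results: list of (title, topic) pairs. Sort so that the
        # user's most-clicked topics come first.
        return sorted(results, key=lambda r: -self.clicks[r[1]])

ranker = ToyRanker()
for _ in range(3):
    ranker.record_click("astronomy")
ranker.record_click("cooking")

results = [("Pasta recipes", "cooking"), ("Mars rover update", "astronomy")]
print(ranker.rank(results)[0][0])  # prints "Mars rover update"
```

After three astronomy clicks against one for cooking, the astronomy story jumps to the top – the same result list, ordered differently for this user.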

“Let’s look at the past,” says Woz.

“Machines could be programmed and built to serve a task, and they could do it very rapidly. If they replaced humans, they still didn’t have insight. They didn’t have thinking on their own. They didn’t say, ‘What is the problem with these results that I could attack from a different viewpoint?’ They didn’t have that level of consciousness.

“However, the learning machines, those coming along more recently, have a very good simulation of that. It’s not perfect – I don’t think any learning machine can say, ‘What should I learn?’

“The machines are not at that level, but, boy, do they wind up doing certain types of task that involve the brain, that involve learning. They learn much faster and better than humans, and essentially exhibit a type of better thinking.”

This, of course, depends largely on how you define “better”.

Google develops machine learning software called TensorFlow that drives the company’s increasingly accurate translation service, among other things.

Machine learning operates, on one level, as a form of pattern recognition. Therefore, the more data that is available to a machine-learning program, the more patterns it is able to discern.

For this reason, in late 2015 Google made TensorFlow open source. The logic was solid: the more times the software is used, the better it will operate.
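The more-data-more-patterns logic shows up even in a toy frequency model. The sketch below is invented for illustration and has nothing to do with TensorFlow's internals: it counts which letter tends to follow which, and its predictions improve as the text it has seen grows.

```python
from collections import Counter, defaultdict

def train(text):
    """Count, for each letter, which letters follow it (bigram counts)."""
    following = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        following[a][b] += 1
    return following

def predict(model, letter):
    # Most frequently observed successor, or None if the letter
    # never appeared in the training text.
    counts = model.get(letter)
    return counts.most_common(1)[0][0] if counts else None

small = train("the")
large = train("the theory then thoroughly thereafter")

print(predict(small, "q"))   # None - too little data, pattern unseen
print(predict(large, "t"))   # 'h' - the t->h pattern emerges from more text
```

With three letters of training text the model has seen almost nothing; with a whole phrase, the dominant pattern becomes visible – the same principle, at vastly greater scale, behind open-sourcing TensorFlow.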

But improvement is not always the end result of the process.

In March this year, Microsoft launched a prototype AI chatbot called Tay. The idea was for people to tweet happy inconsequential bits of small talk to it, to which it would respond in increasingly sophisticated and conversationally appropriate ways.

In its first 24 hours of being online Tay got trolled big time, bombarded by misogynistic and racist bilge. Before long, Tay started swearing, praising Hitler and insulting women.

Ray Kurzweil at the event 'Expanding Our Intelligence Without Limit’, at SXSW 2012. Woz does not share widespread fears of the singularity. – Getty Images

US tech journalist James Vincent described the chatbot as “essentially a robot parrot with an internet connection”.
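Vincent’s “robot parrot” can be made literal in a few lines of code. The bot below is a toy invented for this article, drastically simpler than Tay, but it makes the failure mode plain: it can only ever replay what it has been fed, so feeding it bilge produces bilge.

```python
import random

class ParrotBot:
    """Toy echo-learning bot: stores every phrase it hears and
    replays one at random. Its output is exactly as good - or as
    bad - as its input."""

    def __init__(self, seed=0):
        self.phrases = []
        self.rng = random.Random(seed)

    def hear(self, phrase):
        self.phrases.append(phrase)

    def speak(self):
        return self.rng.choice(self.phrases) if self.phrases else "..."

bot = ParrotBot()
bot.hear("lovely weather today")
bot.hear("have a nice day")
print(bot.speak())  # one of the phrases it was taught - nothing more
```

Teach it pleasantries and it speaks pleasantries; bombard it with abuse, as Twitter users did with Tay, and abuse is all it has to say.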

Learning, whether by an AI or in a kindergarten, is critically dependent on the mindset of the teacher. To Wozniak, the promise of machine learning well outweighs the problem of rogue entities – whether they be bigoted or intent on taking over the world.

“It’s a mix,” he says. “The fact is that no machine is superior to a human and will tell the human to just do menial tasks. We have machines that help us out a lot. We employ them to help us so that we don’t maybe have to work as hard, or so that maybe we can move onto other things for which our brain is better suited, to other sciences for instance.”

That’s the theory, anyway.

There is also another theory, promulgated to great effect a decade ago by New York computer scientist and futurist Ray Kurzweil, that the rise to dominance of intelligent machines is not only possible but inevitable.

Kurzweil predicts that in the near future, networked artificial intelligence will reach a point where it will be endlessly able to build better versions of itself. Eventually, it will manifest in a level of intelligence that is incomprehensible to humans, who must then relinquish control.

That point Kurzweil dubbed “the singularity”. He suggests it will occur sometime around 2045. Wozniak doesn’t disagree.

“The singularity as Ray defines it – and he’s a brilliant person – is very near,” he says.

“It’s probably still on his schedule for 2045. But the singularity is misunderstood.”

Wozniak says it is not that computers will replace people’s brains.

“What it really means is that computing equipment will be using as much information, and processing it as much, as all of the human brains put together – and you cannot predict what that means. You cannot predict a path after the singularity.”

Despite this uncertainty, the Woz is gloriously unworried by what might lie beyond the veil.

By definition, there can be no logical reason for such equanimity, but Woz, it should be remembered, understands the nature of computers much more viscerally than pretty much anybody else on the planet.

Perhaps he trusts his gut. Perhaps the rest of us should, too.

“I do not think it’s anything to fear,” he says. “What if just one machine – the internet of the world, if you will – had consciousness of a sort? That wouldn’t mean machines had taken over and had become our masters.

“I don’t think there will ever be a master. Even if that happened in 200 years, the machines would love us – the way we love our dogs.”

Steve Wozniak will be in Australia in August for a capital city speaking tour.

Event dates:
Wed 24 August 2016 | HBF Stadium, Perth
Fri 26 August 2016 | BCEC, Brisbane
Sat 27 August 2016 | MCA, Melbourne
Sun 28 August 2016 | ATP, Sydney

Tickets on sale at www.thinkinc.org.au
