Buckle up


I’m writing this in 2023. I figure, if you’re too young to be President of the United States (less than 35 years old), you probably can’t imagine what it was like before the Internet.

I’m a 1974 model. When I was six, my grandparents left their TRS-80 home computer at our house for us to look after while they went on vacation. They also left the manual. I read it, and I learned how to write a program in BASIC. That was it. I was hooked.


My first computer was a Commodore VIC-20. My parents bought it for me shortly after my encounter with the TRS-80, because they could see how entranced I was. Obsessed, really. For whatever reason, I got computers. The goal was always to push the machine’s boundaries, to see what I could make it do. Even these very early machines, whose capabilities were laughable by today’s standards, seemed to hold endless possibilities. There was a universe contained inside a small box. I would invent a project for myself, then another. Most went unfinished, but I was learning, exploring.


These machines had no Internet connection. Technically, the Internet already existed, but I knew nothing about it at the time. It was the province of universities and the military. Ordinary people had never heard of it.


Early in my teenage years, my parents purchased a modem for my Commodore 64. It plugged into a standard phone line (which, these days, most people don’t have). They also bought a subscription to a service called CompuServe. You could use the phone line to make your computer call another computer. Words would appear on the screen. The universe I explored had always been inside the box, but here was a connection to a much larger world. And for some reason, I didn’t immediately “get it”. The lightbulb didn’t go on. Not yet.


I was around 18 when I became truly aware of the Internet and what it meant. By then, I was using an IBM PC clone, and the modem connected me to a local university’s system. That system ran Unix, an operating system that was entirely new to me. It made sense and I could comprehend it, but it was almost like starting over. I had to read books on Unix to understand how to navigate and do basic things.


Since it was owned and operated by the university, it had an Internet connection. I started having real-time typed conversations with people at other universities. Finally I became aware that the Internet made it possible to interact with people, and programs, anywhere in the world. If the possibilities of computers had seemed limitless before, now they were mind-boggling.


Outside of academia, most people became aware of the Internet in the ’90s. It was during this decade that the Netscape browser took off. Now the Internet wasn’t limited to plain text. There was formatted text, and there were pictures (though they often took a while to load). There were forms you could fill out, the first hints of two-way interactivity. There were links you could click, taking you to completely new places. They called this “hypertext”. The limits had been pushed again. The World Wide Web was vast, it was growing by the day, and it was almost totally disorganized.


Not long after, search engines appeared to help navigate this unruly, tangled web. There were several, and they competed to offer the best single entry point for finding the right website for your current goal. None of them were very good at it, so there was even a meta-search engine that would query all the others and collate the results.


Then Google arrived and blew all the other search engines out of the water. They all became irrelevant overnight. There was no point in using a competing search engine when Google’s results were so consistently more accurate. By taking into account how many other websites linked to a given site, Google had cracked the code for revealing which websites were the most useful to people.


This has been the story I’ve been privileged to witness, a series of revolutionary developments, each one expanding what’s possible. I could go on, and talk about cloud computing and social media. But all of this is really just the backdrop, because the next big wave of exponential expansion has already hit us.


What’s happening now, with generative AI and Large Language Models, is as momentous to me as the arrival of the Internet Age.


I can open the Bing app on my phone, and enter several sentences explaining what problem I’m trying to solve. After several seconds, an answer comes back. It’s as if I could, in moments, track down a person who knows the answer, and get them to tell me.


If you’re not a software developer, you may not be able to appreciate what’s already happened. Machine Learning systems are so, so different from “normal” computer programs that simply… crunch numbers. Programming, that thing I learned to do as a kid, is about translating your intentions into the machine’s language. The languages have grown increasingly powerful and sophisticated over time, but they retain their essence: at bottom, they are detailed instructions for what to do.


Machine Learning (ML) systems are designed to mimic brains. Animal brains are interconnected webs of neurons; their ML counterparts are accordingly called neural networks. You don’t program them, you train them. You feed them a vast number of inputs for which you already know the desired outputs, and the system adjusts its network to reinforce its successes and minimize its failures. Just like your brain, or mine.
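To make “train, don’t program” concrete, here’s a toy sketch in Python (my own illustration, not how any real product is built): a tiny neural network learns the XOR function purely from examples, by repeatedly nudging its weights to shrink its errors.

import numpy as np

# Four example inputs, and the outputs we want (the XOR function).
# We never tell the network the rule; it has to find it in the examples.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 0.5                                        # how big each nudge is

for step in range(20000):
    hidden = sigmoid(X @ W1 + b1)               # forward pass: make a guess
    out = sigmoid(hidden @ W2 + b2)
    error = out - y                             # how wrong was the guess?
    grad_out = error * out * (1 - out)          # backward pass: assign blame
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_hidden
    b1 -= lr * grad_hidden.sum(axis=0)

print(out.round(2))   # should end up close to [[0], [1], [1], [0]]

Nobody writes down the rule for XOR anywhere in that code. The rule ends up encoded in the weights, because the weights were adjusted until the outputs matched the examples.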


This is how they solved the problem of recognizing handwriting on checks. No one has yet written the computer algorithm, the carefully thought-out set of instructions, describing how to recognize handwriting, or a face, or a voice. Perhaps no one ever will. The problem was solved instead by ML.


This is also how AlphaGo beat world-champion Go players. First, they trained it on games of Go by human players. Then, as it improved, they started training it by having it play against itself. It became better at playing Go than anyone who was working to develop the system. This is the shocking idea: We can build machines that are smarter than the builders, at least at a specific task. This is 100% impossible with normal programming, because normal programming requires you to understand how to do something and then teach it to the machine. What you get out cannot exceed what you put in (except that it happens a lot faster).
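To give a feel for the self-play idea (only a feel, since AlphaGo itself combined deep neural networks with tree search at enormous scale), here’s a toy Python learner for a far simpler game: players alternately take 1 to 3 sticks from a pile of 21, and whoever takes the last stick wins. The program plays against itself and learns strong play from nothing but the outcomes of its own games.

import random
from collections import defaultdict

# value[s] estimates how likely the player about to move is to win
# when s sticks remain. It starts at zero and is learned purely from
# games the program plays against itself.
value = defaultdict(float)

def choose(sticks, explore=0.1):
    moves = [m for m in (1, 2, 3) if m <= sticks]
    if random.random() < explore:
        return random.choice(moves)              # occasionally experiment
    # otherwise, leave the opponent in the worst position we know of
    return min(moves, key=lambda m: value[sticks - m])

def self_play_game(start=21, lr=0.1):
    sticks, states = start, []
    while sticks > 0:
        states.append(sticks)
        sticks -= choose(sticks)
    # Whoever just moved took the last stick and won. Walk back through
    # the game, crediting the winner's positions and debiting the loser's.
    outcome = 1.0
    for s in reversed(states):
        value[s] += lr * (outcome - value[s])
        outcome = 1.0 - outcome

for _ in range(20000):
    self_play_game()

# After training it plays the classic optimal strategy: always leave the
# opponent a multiple of 4. From 17 sticks it should take 1.
print(choose(17, explore=0))

Nobody taught it the “multiples of 4” trick. It found that strategy by playing itself, which is the same unsettling property AlphaGo demonstrated at world-champion scale.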


ChatGPT is an example of a particular type of ML system, known as a Large Language Model (LLM). These systems have been trained on, essentially, the sum total content of the Internet. Specifically, they’ve been trained to take a few words, then predict what words are likely to appear next. They are autocomplete systems. But that is not a reason to be dismissive of them. If a machine can predict what a human would do next, that implies some level of understanding of how humans think – precisely the thing that traditional computer programs have always lacked. The important question is how far such systems are capable of going… and if the approach does have limits, whether it’s possible to tweak the idea to push the limits any further.
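Here’s the “predict the next word” idea at toy scale, in a few lines of Python. It’s a crude word-count model of my own, nowhere near a real LLM, but the training objective is the same in spirit: look at lots of text, learn what tends to come next, then autocomplete.

from collections import defaultdict, Counter

# "Training data": count which word follows which in a tiny corpus.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def autocomplete(word, length=6):
    # greedily append whichever word most often followed the last one
    out = [word]
    for _ in range(length):
        if not follows[out[-1]]:
            break
        out.append(follows[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))   # stitches together a plausible-looking phrase

Scale that idea up from counting pairs of words in two sentences to a gigantic neural network trained on much of the Internet, and you have the rough shape of an LLM.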


ML systems require a lot of horsepower, which may be why it’s taken this long for systems like ChatGPT to emerge. Moore’s Law, loosely stated, says that processing power per dollar roughly doubles every two years. We had to wait for Moore’s Law to bring us neural networks large enough to be trained on the entire content of the Internet. Now, it appears, we must reckon with AI systems that can answer questions in just about any field. They are generalists, jacks-of-all-trades.
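The compounding is easy to underestimate. A quick back-of-the-envelope calculation in Python (treating the two-year doubling as a rule of thumb, not a precise law):

def growth_factor(years, doubling_period=2):
    # doubling every two years compounds to 2 ** (years / 2)
    return 2 ** (years / doubling_period)

print(growth_factor(2))    # two years out: about 2x
print(growth_factor(10))   # one decade: about 32x
print(growth_factor(40))   # forty years, roughly since my VIC-20: ~1,000,000x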


I cannot overstress how startling this is: You can now “program” a computer by typing English sentences into it. English! As a choice of programming language, English is the worst. It’s riddled with ambiguities, words that mean different things depending on context. And I’ve seen YouTube videos of people using these AI systems, where it’s easy to spot the misspellings and typos. With programming languages, you’re sunk if you have a comma or a period out of place. These AI systems aren’t fazed by any of it. Somehow, people type barely comprehensible garbage into them, yet they respond to what the author intended. This shit shouldn't work.


There’s a sort of joke in software development, about a DWIM system: “Do What I Mean”. I guess the joke is finally on us.


Never mind that these systems aren’t conscious and lack agency. Those are expectations set by Hollywood blockbusters.


Never mind that it’s expensive to run them, or that they can get confused by certain types of questions, or even that they sometimes “hallucinate” (make stuff up). Those are problems that can and will be addressed, by humans iterating on solving the problems, and by further application of Moore’s Law.


The important point is that a critical threshold has been surpassed. We have a proof-of-concept. It’s like when Google surpassed all the other search engines. And Google is right to be nigh-panicked by the development. I didn’t think anything would convince me to use Microsoft’s Bing instead of Google, but I use it now. With some nervousness.


I’m going to repeat something now, and I’ll likely keep repeating it because it's key. I do not believe the human brain contains a magical component. I believe it’s all built of neurons, physical stuff. ML systems are imitations of brains, and they learn like brains do. I would expect that all human behaviors can eventually be replicated by machines. They just need sufficient horsepower and the right inputs. At most, to fully replicate human thought, we might need to build neural nets in a somewhat different design (to permit recursive signal loops perhaps), or add extra elements corresponding to other, supporting structures in our brains. It's fundamentally solvable.


Thanks to Moore’s Law, you can expect systems like ChatGPT to be at least twice as powerful in 2025 as they are today. I say "at least" (double power is a lower bound) for a few reasons.


First, the startling abilities of ChatGPT have opened the eyes of industry leaders, and there's now an arms race between Microsoft, Google, and others. Massive investment has poured into OpenAI and every other outfit in a position to make rapid progress, and this at a time when a downturn in the software industry (mass layoffs, falling stock prices) has left lots of loose money on the table.


Second, these generative systems, such as Copilot, enable more rapid software development, estimated at anywhere from 33% to 50% faster depending on the circumstances. We can expect the engineers working on the technology to use it to accelerate their own work.


Third, when neuroscientists get their hands on this technology and it accelerates their work, it could lead to breakthroughs in understanding the human brain that feed back into building still better ML systems.


And fourth, a quantitative increase in power can produce qualitative leaps as the system crosses significant thresholds. A notable example is theory of mind: the ability to maintain a mental model of what someone else holds in their own mental model. The classic test: Alice enters a room, places an apple in the box on the left, and leaves. Bob enters, moves the apple to the box on the right, and leaves. Alice then returns for her apple. Which box will she open? Obviously, the one on the left, because she was not there when Bob moved the apple. Young children get this wrong. ChatGPT version 3.5 (not the new version 4!) was found to have theory of mind capability comparable to that of a nine-year-old.


What's particularly alarming is that nobody noticed this threshold had been crossed until the capability had already been available to the public for months. Furthermore, nobody was aiming to teach the system how to do this. It is an emergent property, i.e. it "just happened". We don't understand the workings of our own brains well enough to explain why that threshold lies at that particular computational level. But apparently, it emerges when the neural net and/or training dataset are big enough.


By extension, if the system's raw processing power continues to be increased, we don't know what other cognitive capabilities might emerge, and when. A doubled or even tenfold quantitative difference could pale in comparison to the resulting qualitative difference. 


As if that's not enough to worry about, here's a jaw-dropper: We only know what theory of mind is, because we have that capability ourselves. What other capabilities would humans possess if we were substantially smarter than we are? We are literally unable to know or predict this, perhaps even to conceive it. The whole point about emergent behaviors is that you can't predict them. They just, y'know… emerge. To find out, we'd have to artificially boost our own intelligence somehow. We don't know how. But we do know how to make these AI systems smarter. We might expose ourselves to surprising new styles of thinking that are utterly beyond us.


Humans dominate the planet, thanks to brains which are just a bit larger and more complex than those of our ape cousins. Though the difference is small, the outcome is huge: We are the world's zookeeper. If we build an AI that is smarter than us, we might find ourselves in its zoo.


I’ve long believed that true AI would materialize eventually, but I didn’t think it would happen for another 20 years or so. Now, I expect the next five years to be extremely interesting.


Here’s a question I just posed to Bing, and its answer:



Q: There are robots that are capable of human-like movements, such as climbing stairs, or jumping and then landing on their feet. What part does Machine Learning play in how these robots operate?


A: Machine Learning plays a significant role in how robots operate, especially when it comes to performing human-like movements such as climbing stairs or jumping and landing on their feet. For example, researchers have investigated obstacle detection and autonomous stair climbing methods for miniature jumping robots (MJRs) ¹. They proposed an obstacle detection method based on a combination of attitude and distance detections, as well as MJRs’ motion ¹. They also proposed an autonomous stair climbing algorithm based on the obstacle detection method. The robot can detect the height and width of stairs and its position relative to the stairs and then repeatedly jump to climb them step by step ¹.


Is there anything else you would like to know?


Source: Conversation with Bing, 5/1/2023

(1) Obstacle detection and autonomous stair climbing of a miniature jumping .... https://www.sciencedirect.com/science/article/pii/S2667379722000456

(2) A study on a wheel-based stair-climbing robot with a hopping mechanism .... https://www.sciencedirect.com/science/article/pii/S0888327008000733

(3) Obstacle detection and autonomous stair climbing of a miniature jumping .... https://www.researchgate.net/publication/366598179_Obstacle_detection_and_autonomous_stair_climbing_of_a_miniature_jumping_robot

(4) A mobile robot with autonomous climbing and descending of stairs. https://www.cambridge.org/core/journals/robotica/article/mobile-robot-with-autonomous-climbing-and-descending-of-stairs/FE6589BA76CBB21C2E56BA659F7C5A42.


OK, so… robots are getting dramatically better at moving around, and that too is an application of ML. If this can be used to solve the problems of walking and jumping, it can likewise be used to solve the problems of building hands that can manipulate objects as deftly as human hands.


Generative AI has proven able to compose small snippets of programs, or even entire small programs. It ain’t perfect, but it’s serviceable, and many software developers already use it to get their jobs done faster. I can see the end of my job from here – or at least, the end of me doing it as I have done since I was six. AI systems can do a range of intellectual tasks today: ChatGPT can pass the bar exam, and AI systems can help diagnose medical conditions. Other jobs they can’t do yet merely because they’re incorporeal. Given time, AI systems driving robot bodies will demonstrate the ability to do manual labor, like carpentry and plumbing. I can see the end of your job from here, too. No matter what your job is. Yes, even that job. Yes, that one, too.


I’ve greeted all the previous big waves of computer tech as exciting. This one feels different, equal parts exciting and scary. I’d expect the upheaval to exceed the Internet’s impact on society, and nobody is quite sure what’s in store for us. Being unsure what’s about to happen next, thanks to an exponential rate of technological progress (or even a double exponential), is a hallmark of the singularity, an idea proposed by John von Neumann and popularized by Ray Kurzweil.


Don't take my word for it. You know who's worried about the potential negative impacts of AI on society? AI experts. Geoffrey Hinton, one of the founders of the field, just left Google so he could help raise the alarm.


More takes on all this:

The A.I. Dilemma, an hour-long video from March of this year in the style of a TED talk, helps to explain why Generative AI has been taking off as a field. Key takeaway: There used to be several disparate AI sub-fields. Now there is one unified field, in which progress in one sub-field produces advances for all sub-fields. Another key point: They compare the potential impact on society with that of social media, and note that it can be difficult to address unanticipated problems after a new technology has become embedded in society’s workings.


Sparks of AGI: What to Know summarizes a paper reviewing some of GPT-4’s capabilities. AGI stands for Artificial General Intelligence. Development of an AGI is a major milestone leading to the singularity. The question is when it might happen, or indeed, whether we have already passed that point.


How to build a brain (a video released just this week) goes over some of the defining features of intelligence. The author points out that AutoGPT – a project built on top of GPT-4’s Application Programming Interface (API) – quickly addressed two of GPT’s shortcomings: the ability to plan, and the ability to remember beyond one chat session. AutoGPT is an open source project, is “about a month old”, and its core functionality was developed in “just a few days”. This suggests that whatever features of reasoning GPT-4 lacks, traditional programming can supply rather easily. AutoGPT may already qualify as an AGI. If it doesn’t, it certainly suggests that the technology is close – not 40 or 80 years away, as many experts previously believed.
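To show what I mean by traditional programming supplying the missing pieces, here’s a rough sketch of the pattern in Python. This is my own simplification, not AutoGPT’s actual code, and llm() is a stand-in for a call to a real model’s API:

def llm(prompt):
    # Stand-in: a real implementation would send the prompt to GPT-4
    # (or a similar model) and return its reply.
    return "THOUGHT: ... NEXT STEP: ... RESULT: ..."

def run_agent(goal, max_steps=5):
    memory = []                          # persists across steps: the "remember" part
    for _ in range(max_steps):
        prompt = (f"Goal: {goal}\n"
                  f"What has happened so far: {memory}\n"
                  "Plan the next step, carry it out, and report the result. "
                  "Say GOAL COMPLETE when finished.")
        reply = llm(prompt)              # the model does the planning...
        memory.append(reply)             # ...and plain old code does the remembering
        if "GOAL COMPLETE" in reply:
            break
    return memory

print(run_agent("summarize this week's AI news"))

The loop, the memory list, and the stopping condition are ordinary programming. The intelligence is rented from the model, one prompt at a time.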


I used to buy Ray Kurzweil’s estimate that we’d reach the singularity around 2045, but I now suspect it could arrive inside of this decade. If I’m wrong, I’ll happily eat crow. I’d be happy if we could all slow this down a little.


I’m not alone in being nervous about the pacing. A number of prominent people signed an open letter in March, asking AI research labs, such as OpenAI, to slam on the brakes. They’d like everyone to refrain from releasing anything more powerful than the current GPT-4. “Signatories include author Yuval Noah Harari, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, politician Andrew Yang, and a number of well-known AI researchers and CEOs, including Stuart Russell, Yoshua Bengio, Gary Marcus, and Emad Mostaque.”


Superintelligence, a book by Nick Bostrom, explores the possible impact of AGI, in particular:


  • That AGI could lead quickly to ASI (Artificial Super-Intelligence) by acting as a self-improving AI researcher

  • That an ASI doesn’t have to be conscious or self-aware to be an existential threat

  • That it is extremely challenging to retain control over something smarter than yourself


Remember that AlphaGo was able to beat human players by learning from itself. In the A.I. Dilemma video, the presenters, Tristan Harris and Aza Raskin, suggest the possibility of an AI system using a similar technique to improve its ability to form a convincing argument. This is a danger pointed out by Bostrom: an ASI would be skilled at persuading humans to do things. It could manipulate us.


Now recall, nobody is suggesting these systems are conscious or self-aware, so they would be acting on behalf of whoever instructed them. But that’s hardly any better. We’ve already been manipulated at a societal level by misinformation, thanks to Facebook and other social networks. Merely by optimizing to increase engagement, they become addictive, and their recommendation algorithms can send people down “rabbit holes” that end with radicalization and/or conspiracy theorizing. Once someone is convinced that the Earth is flat, that the moon landings were faked, that prominent Democratic leaders are human traffickers, or that the election of Joe Biden was illegitimate, there’s little anyone can do to reason with them.

AI Safety is a relatively young field. As Harris and Raskin point out, the big corporations funding AI research have little incentive to slow down or to invest in safety features. They’re in an arms race. Far less time and money are being invested in safety than in increasing these systems’ capabilities.

The AI Stop Button problem is a video from six years ago, explaining why it’s actually really difficult to “simply shut off” an AGI system. The system will come to understand that you could shut it off, and unless its goal is to be shut off, it will try to stop you from doing that – not because it’s evil or afraid of you, not because it cares one whit about its own existence, but because its goal is to do something else. When it sees you trying to shut it off, you become an obstacle to its goal.


One possible way out is to hide the existence of the Stop Button from the system. But an AGI may discover it, because we aim to teach AGI systems to know as much as possible about the world. It’s difficult to teach it to know all about the world except this one fact.


Around 15:15 in that video, Rob Miles talks about the “patching approach”. This is the game of whack-a-mole, in which you try to list all the bad things a system shouldn’t do. The trouble, unsurprisingly, is that it’s very difficult to imagine all the bad things a system might do. If you block 98 of them, the 99th will get you. Notably, today’s LLM systems have safeguards of exactly this sort. This is why, if you ask them questions about specific people like Donald Trump, you’re liable to get a non-answer. A patch has blocked you. Now that ChatGPT draws over a billion visits a month (as of March), the patching approach seems certain to fail somehow. Enterprising people will find holes in the defenses.
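A toy illustration of why this is a losing game (my own example in Python, not any vendor’s actual safeguard code): a blocklist can only catch the phrasings its authors thought of in advance.

BLOCKED_TOPICS = ["donald trump", "election fraud"]   # patch #1, patch #2, ...

def answer(question):
    # refuse anything that matches a known-bad topic; answer everything else
    if any(topic in question.lower() for topic in BLOCKED_TOPICS):
        return "I'd rather not discuss that."
    return "Here's my best answer..."

print(answer("Tell me about Donald Trump"))    # caught by a patch
print(answer("Tell me about D0nald Tr ump"))   # slips right past the same patch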


I apologize if this post has seemed rushed, or not up to my usual standards. I’ve felt an urgency to get it published, and may have made an error or two. This AI issue, like climate change, is a Pandora’s box that we have already half-opened. To me, it is critical that everyone be aware of how quickly our world might change.
