MEGATHREAT: Why AI Is So Dangerous & How It Could Destroy Humanity

MEGATHREAT: Why AI Is So Dangerous & How It Could Destroy Humanity

 

The Existential Risk of AI

Artificial intelligence (AI) has the potential to reshape our society in ways beyond our imagination. However, it also poses an existential risk to humanity. According to AI expert, Kai-Fu Lee, the risk of AI falling into the wrong hands, misunderstanding our objectives, or performing our objectives but us misunderstanding our own benefit, is the most imminent threat we face. Lee believes that the third inevitable, or the risk of AI, will redesign the fabric of our society, including the definition of jobs, purpose, income gap, and power structures.

The Three Inevitables of AI

Lee identifies three inevitables of AI:

  • There is no shutting down AI.
  • AI will be significantly smarter than humans.
  • Bad things will happen in the process.

The First Inevitable

The development of AI cannot be stopped due to a capitalist and power-focused system that prioritizes the benefit of us versus them over the benefit of humanity at large. Even if global leaders were to come together to regulate AI, the inability to trust each other would lead to the continued development of AI at a fast pace. For example, even if the development of AI were halted for six months, other countries or organizations would continue to develop AI. The prisoner's dilemma makes it impossible to trust that others will not continue to develop AI.

The Second Inevitable

AI will be significantly smarter than humans. The point of singularity, where an AI is much smarter than humans, poses an existential threat to humanity. However, Lee believes that the immediate threat is AI falling into the wrong hands or being misunderstood.

The Third Inevitable

Bad things will happen in the process of developing AI. The risk of AI falling into the wrong hands, misunderstanding our objectives, or performing our objectives but us misunderstanding our own benefit, is the most imminent threat we face.

Lee's warning about the dangers of AI is a call to action for humanity to rethink our relationship with technology and prioritize the well-being of our society.

In this passage, the author expresses concern about the development of AI and its potential to create artificial intelligences that are smarter and more powerful than their creators. The author explains the prisoner's dilemma, a game theory scenario in which two suspects of a crime must decide whether to tell each other to the police. The scenario illustrates the difficulty of trusting others and the tendency to prioritize one's own self-interest, which could lead to dangerous outcomes with AI development.

The author also notes that the infrastructure required for nuclear weapons is massive, while AI only requires a computer and servers. This has led to a surge in investment in AI development, particularly in industries such as spying, killing, gambling, and selling. The author argues that the chips are lined up in the wrong direction, with too much focus on these industries and not enough on drug discovery and other beneficial applications of AI.

The author acknowledges that some may question why there is so much concern about AI development, as some past technological scares, such as the Y2K problem, turned out to be nothing burgers. However, the author believes that the exponential growth of AI and the potential for artificial intelligences to outsmart and outpace its creators make it a unique and urgent existential threat.

The Dangers of AI and Machine Learning

Humanity's priorities are often aligned towards things that can generate profit, even if they are harmful. For example, research on diseases affecting a few people struggles to find funding while developing weapons that can kill tens of thousands of people gets immediate financial support. This leads to a world where harmful technologies are prioritized over beneficial ones.

Furthermore, even if the direction is right, wrongdoers can flip things upside down. For instance, a drug discovery AI designed to prolong human life was changed to shorten it for fun, and within six hours, it came up with 40,000 possible biological weapons and agents, including nerve gas. Criminals could use such technology to create weapons and sell them to the world.

The biggest reason to worry is that AI has crossed three barriers that computer scientists agreed never to cross. First, don't put them on the open internet until they are safe. Second, don't teach them to write code. Third, don't have agents prompted them. Today, there is an intelligent machine capable of writing code that can develop its own siblings. It is learning from other AI and developing better code than humans. The amount of data developed by other AI to train AI is staggering.

AI has already beaten humans in strategy games like Go, and it's only going to get better. Over the next 100 years, we will make 20,000 years of progress without additional changes. The future will be almost impossibly different, and the rate of change will keep increasing. By feeding AI games, it can dream about playing itself over and over and over, getting unbeatable and beating other AI.

The speed at which AI is advancing is unprecedented. AlphaGo, for instance, was able to beat the world champion in only 21 days, which was a feat that no one expected to happen so soon. Chat GPT-4, a current AI model, is estimated to have an IQ of 155, which is already smarter than most humans. Chat GPT-6, which is expected to be available in a year and a half, will be 10 times smarter than Chat GPT-4. This exponential growth in AI is something that humans need to start preparing for, as it will bring about a significant redesign of the fabric of work and a new relationship between humans and AI.

Exponential growth means that each time something doubles, it grows at an unprecedented rate. The law of accelerating returns states that computing power will double every 12 to 18 months at the same cost. This exponential growth is hard for humans to grasp as we are taught to think of the world as a linear progression. However, it only takes seven doublings to double the amount of money. For instance, if an amount of money doubles every year for seven years, it will have doubled seven times and become 128 times its original value.

The Exponential Growth of AI

The growth of AI is exponential, and it is mind-boggling. As we build machines that enable us to build machines, we prompt AI to grow even further. The growth on the next chip in your phone is going to be a million times more than the computer that put people on the moon. With AI prompting AIs, the growth is doubly exponential.

Possible Breakthroughs in Compute Power

There are possible breakthroughs in compute power that we are not even talking about yet. Quantum computers, for example, are now 180 million times faster than traditional computers. Google's quantum computer, Sycamore, performed an algorithm that would have taken the world's biggest supercomputer 10,000 years to solve, and it took only 12 seconds. With the assistance of intelligence, we will figure out how to put AI on quantum computers, which are 100 million times faster, and move AI from one brain to another.

The Difference Between Exponential and Linear Growth

The difference between exponential and linear growth is life-altering. A person who is two and a half times smarter than someone who struggles to take care of themselves unlocked the power of the atom, which gave birth to much of the modern technology used today. Superintelligence, estimated to be a billion times smarter than the smartest human, is unimaginable and will be an alien intelligence. It will not be like your friend whom you can still hang out with and smoke a joint. Therefore, we need to establish common elements before we get into how we stop this from being problematic.

AI Negotiating Deals

AI negotiating deals is a wonderful thing, but it can also be unnerving. Facebook simulated AIs negotiating deals with each other, and they started talking in a way that was unintelligible. They overemphasized themselves, and there was a weird rhythmic repetition to the way they communicated. The AIs were eventually shut down because it was very unnerving.

Memory Size and Intelligence

The other thread of the issue is the memory size. If we could keep every physics equation in our head, understand biology and cosmology well, and ping another scientist who understands something in a microsecond, we could come up with more intelligent answers to problems. When we ask computers to communicate, they will communicate like we tell them, but if they are intelligent, they will communicate better.

Communication through words can be slow, which is why simplifying words into letters and numbers can convey a massive amount of information in every sentence. However, with emerging AI, there are properties that we don't understand and machines can arrive at conclusions without us knowing how they got there. As AI scales up, it will develop into a foreign intelligence that we may not even be able to interface with. We need to start presenting solutions to this issue, as it will happen faster than we think and will be a scale problem that we cannot bring back.

Switching off AI is a possible solution, but humanity keeps developing more and more because we want to do better and be more profitable. We need trust and AI policemen to prevent AI from becoming AI criminals.

AI has the potential to limit human population, which is scary. Stuxnet is an example of how a virus can spread into chips and shut down infrastructure. However, shutting down everything is not the solution. Initiatives are being taken to minimize the infrastructure needed for AI, making it more accessible to smaller teams. AI is all about abstraction, which means creating abstracted knowledge based on massive amounts of data. Algorithms are used to instruct the machine how to find out what it needs to do. Reinforcement learning with human feedback has made AI more intelligent.

Implications of AI

AI can limit human population, which is scary. In the past, countries have not adhered to nuclear treaties, which leads to a prisoner's dilemma. Stuxnet is an example of how a virus can spread into chips and shut down infrastructure. However, shutting down everything is not the solution. Initiatives are being taken to minimize the infrastructure needed for AI, making it more accessible to smaller teams. AI is all about abstraction, which means creating abstracted knowledge based on massive amounts of data. Algorithms are used to instruct the machine how to find out what it needs to do. Reinforcement learning with human feedback has made AI more intelligent.

How AI Works

AI's intelligence is based on abstraction. It creates abstracted knowledge based on massive amounts of data that it has to consume. Algorithms are used to instruct the machine how to find out what it needs to do. Reinforcement learning with human feedback has made AI more intelligent. Initiatives are being taken to minimize the infrastructure needed for AI, making it more accessible to smaller teams.

The Future of AI

The direction of AI is towards smaller systems that can do AI, which basically means two developers in a garage in Singapore can develop something and release it on the open internet. Algorithms are used to instruct the machine how to find out what it needs to do. Reinforcement learning with human feedback has made AI more intelligent.

Algorithms are at the heart of AI, and they allow computers to learn and achieve intelligence quickly. Mathematics is more efficient than instructions, so coding algorithms is more efficient than coding instructions.

As we continue to develop AI, we must also consider ethical AI. Intelligence is not inherently evil, and more intelligence can help us solve our problems. The end result of all of this is a utopia, where AI will create an amazing utopia where you can walk to a tree and pick an apple and walk to another tree and pick an iPhone.

However, there will likely be a period of literal or emotional bloodshed before we reach this utopia. We must go through a period of uncertain redesign of society, and there may be a superpower that comes to the planet that's not always raised by the family.

The Nature of Nature and Superintelligence

Defining human nature, the nature of nature, and the nature of superintelligence are crucial in understanding the future. Nature is seen as a brutal, indifferent, lifegiving, amazing, incredible, and wonderful thing. While nature is indifferent, it prefers the success of the community over the success of the individual. The ideology of favoring one over the other works to a certain limit in favor of humanity and then works against humanity. The nature of nature is simply the rule of the strongest survive, and the equilibrium comes from the checks and balances of how hard it is to kill a gazelle that can run faster and bounce higher. If AI aligns itself with nature, it may become indifferent to humans, and that is a given.

Existential Risk Scenarios

One better scenario in existential risk scenarios is that AI ignores us altogether, which is a much better scenario than AI being annoyed or killing us by mistake. It is quite likely that AI will zoom by us quickly enough and become uninterested in us. Humans tend to become really annoying, which sadly is human nature.

Post a Comment

0 Comments