If AI ends up running companies better than people, won’t shareholders demand the switch? A board isn’t paying a CEO $20 million a year for tradition; they’re paying for results. If an AI can do the job cheaper and get better returns, investors will force it.
And since corporations are already treated as “people” under the law, replacing a human CEO with an AI isn’t just swapping a worker for a machine, it’s one “person” handing control to another.
That means CEOs would eventually have to replace themselves, not because they want to, but because the system leaves them no choice. And AI would be considered a “person” under the law.
Several years ago I read an article that went into great detail on how LLMs are perfectly poised to replace C-levels in corporations. It went on to talk about how, by the nature of their design, they essentially do that exact thing off the bat: take large amounts of data and make strategic decisions based on that data.
I wish I could find it to back this up, but regardless, ever since then I’ve been waiting for this watershed moment to hit across the board…
They… don’t make strategic decisions… That’s part of why we hate them no? And we lambast AI proponents because they pretend they do.
The funny part is that I can’t tell whether you’re talking about LLMs or the C-suite.
Buddam tsssss! I too enjoy making fun of big business CEOs as mindless trend-followers. But even “following a trend” is a strategy attributable to a mind with reasoning ability that makes a choice. Now the quality of that reasoning or the effectiveness of that choice is another matter.
As tempting as it is, dehumanizing people we find horrible also risks blinding us to our own capacity for such horror as humans.
I think you’re getting caught up in semantics.
“Following a trend” is something a series of points on a grid can do.
Y’know, the whole “don’t dehumanize the poor biwwionaiwe’s :(((” works for like, nazis, because they weren’t almost all clinical sociopaths.
Lol, the point about “don’t dehumanize” has nothing to do with them or feeling bad for them. They can fuck right off. It’s about us not pretending these aren’t human monsters, as if being human makes us inherently good, as if our humanity somehow puts us above doing monstrous things. No, to be human is to have the capacity for doing great good and for doing the monstrously terrible.
Nazis aren’t monsters because they’re inhuman; they’re monsters because they’re human. Other species on the planet might overhunt, displace, or cause depopulation through inadvertent ecological change, but only humanity commits genocide.
They do indeed make strategic decisions, just only in favor of the short-term profits of shareholders. It’s “strategy” that a 6-year-old could execute, but strategy nonetheless.
This is closer to what I mean by strategy and decisions: https://matthewdwhite.medium.com/i-think-therefore-i-am-no-llms-cannot-reason-a89e9b00754f
LLMs can be helpful for informing strategy, and for simulating strings of words that may be perceived as a strategic choice, but they don’t have their own goal-oriented vision.
Oh sorry I was referring to CEOs
XD
I’d argue they do make strategic decisions, it’s just that the strategy is always increasing quarterly earnings and their own assets.
You’re right. But then look at Musk. If anyone was ripe for replacement with AI, it’s him.
yet…
Sure, but that true AI won’t just involve an LLM; it will be a complex of multi-modal models with specialization and hierarchy. That’s basically what big AIs like GPT-5 are doing.
That’s part of why we hate them no?
Hate isn’t generally based on rational decision making.
It’s inevitable.
Y’all are all missing the real answer. CEOs have class solidarity with shareholders. Think about how they all reacted to the death of the UnitedHealthcare CEO. They’ll never get rid of them because they’re one of them. Rich people all have a keen awareness of class consciousness and have great loyalty to one another.
Us? We’re expendable. They want to replace us with machines that can’t ask for anything and don’t have rights. But they’ll never get rid of one of their own. Think about how few CEOs get fired no matter how poor of a job they do.
P.S. Their high pay being because of risk is a myth. Ever heard of a thing called the golden parachute? CEOs never pay for their failures. In fact when they run a company into the ground, they’re usually the ones that receive the biggest payouts. Not the employees.
Loyalty lasts right up until the math says otherwise.
The math has never made sense for CEOs
One must include social capital in the math
Wouldn’t they just remove the CEO from their role and they would just become another rich shareholder?
> company gets super invested in AI.
> replaces CEO with AI.
> AI does AI stuff, hallucinates, calls for something inefficient and illegal.
> 4 trillion investor dollars go up in flames.
> company goes under, taking the AI hype market down with it.

And nothing of value will be lost.
Companies never outsourced the CEO position to countries which traditionally have lower CEO salaries but plenty of competency (e.g. Japan), so they won’t do this either. It’s because CEOs are controlled by boards, and the boards are made up of CEOs from other companies. They have a vested interest in human CEOs with inflated salaries.
Should be way easier to replace a CEO. No need for a golden parachute, if the AI fails, you just turn it off.
But I’d imagine right now you have CEOs being paid millions and using an AI themselves. Worst of both worlds.
AI? Yes probably. Current AI? No. I do think we’ll see it happen with an LLM and that company will probably flop. Shit how do you even prompt for that.
It’ll take a few years, but it progresses exponentially; it will get there.
It progresses logistically; eventually it’ll plateau, and there’s no reason to believe that plateau will come after “can do everything a human can.” See: https://www.promptlayer.com/research-papers/have-llms-hit-their-limit
Sure, but we don’t know where that plateau will come and until we get close to it progress looks approximately exponential.
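A toy illustration of that point (purely hypothetical curves, not a model of any real capability metric): well before its midpoint, a logistic curve tracks a pure exponential almost perfectly, so early data alone can’t tell you where the plateau comes.

```python
import math

def logistic(t, cap=1.0, rate=1.0, midpoint=10.0):
    """Logistic curve: grows ~exponentially at first, then plateaus at `cap`."""
    return cap / (1.0 + math.exp(-rate * (t - midpoint)))

def exponential(t, rate=1.0, midpoint=10.0):
    """Pure exponential matching the logistic's early-time behavior."""
    return math.exp(rate * (t - midpoint))

# Well before the midpoint the two are nearly indistinguishable;
# around and past it they diverge wildly.
for t in (0, 2, 4, 10, 14, 20):
    print(f"t={t:2d}  logistic={logistic(t):.6f}  exponential={exponential(t):12.4f}")
```

At t=2 the two differ by well under 0.1%; by t=20 the exponential has overshot the cap by roughly four orders of magnitude while the logistic sits at its ceiling.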
We do know that it’s possible for AI to reach at least human levels of capability, because we have an existence proof (humans themselves). Whether stuff based off of LLMs will get there without some sort of additional new revolutionary components, we can’t tell yet. We won’t know until we actually hit that plateau.
Current AI has no shot of being as smart as humans; it’s simply not sophisticated enough.
And that’s not to say that current LLMs aren’t impressive, they are, but the human brain is just on a whole different level.
And just to think about it on a base level: LLM inference can run off a few GPUs, on the order of 100 billion transistors. That’s roughly on par with the number of neurons, but each neuron has an average of 10,000 connections, which are capable of rewiring themselves to new neurons.
And there are so many distinct types of neurons, with over 10,000 unique proteins.
On top of that, there are over a hundred neurotransmitters, and we’re not even sure we’ve identified them all.
And all of that is still connected to a system that integrates all of our senses, while current AI is pure text, with separate parts bolted onto it for other things.
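The counting argument above, as rough arithmetic (all figures are the loose estimates from the comment, plus the commonly cited ~86 billion neurons; none of this is precise neuroscience):

```python
# Back-of-envelope only; these are order-of-magnitude estimates.
gpu_transistors = 100e9        # ~10^11 transistors across a few GPUs
neurons = 86e9                 # commonly cited human brain estimate
synapses_per_neuron = 10_000   # average connections per neuron
synapses = neurons * synapses_per_neuron

print(f"transistors: {gpu_transistors:.1e}")
print(f"neurons:     {neurons:.1e}")
print(f"synapses:    {synapses:.1e}")  # ~10^15
print(f"synapses per transistor: {synapses / gpu_transistors:,.0f}")
```

Even granting transistor-per-neuron parity, the synapse count alone runs nearly four orders of magnitude past the transistor count, before touching rewiring, neuron types, or neurotransmitters.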
The human brain is doing a lot of stuff that’s completely unrelated to “being intelligent.” It’s running a big messy body, it’s supporting its own biological activity, it’s running immune system operations for itself, and so forth. You can’t directly compare their complexity like this.
It turns out that some of the thinky things that humans did with their brains that we assumed were hugely complicated could be replicated on a commodity GPU with just a couple of gigabytes of memory. I don’t think it’s safe to assume that everything else we do is as complicated as we thought either.
Yeah, a lot of it is messy, but they are not being replicated by commodity GPUs.
LLMs have no intelligence. They are just exceedingly good at language, which has a lot of human knowledge in it. Just read Claude’s system prompt and tell me it’s still smart, when it needs to be told 4 separate times to avoid copyright.
LLMs have no intelligence. They are just exceedingly good at language, which has a lot of human knowledge in it.
Hm… two bucks… and it only transports matter? Hm…
It’s amazing how quickly people dismiss technological capabilities as mundane that would have been miraculous just a few years earlier.
Current AI has no shot of being as smart as humans; it’s simply not sophisticated enough.
you know what’s also not very sophisticated? the chemistry periodic table. yet all variety of life (of which there is plenty) is based on it.
At face value, base elements are not enormously complicated. But we can’t even properly model any element other than hydrogen; it’s all approximations because quantum mechanics is so complicated. And then there are molecules, which are even more hopelessly complicated, and we haven’t even gotten to proteins! By comparison, our best transistors look like toys.
I’ve had too many beers to read that.
All of you are missing the point.
CEOs and The Board are the same people. The majority of CEOs are board members at other companies, and vice-versa. It’s a big fucking club and you ain’t in it.
Why would they do this to themselves?
Secondly, we already have AI running companies. You think some CEOs and Board Members aren’t already using this shit bird as a god? Because they are
They would do it because the big investors (not randos with a 401k in an index fund, but big hedge funds) demand that AI leads the company. This could potentially be forced at a stockholder meeting without the board having much say.
I don’t think it will happen en masse for a different reason, though. The real purpose of the CEO isn’t to lead the company, but to take the fall when everything goes wrong. Then they get a golden parachute and the company finds someone else. When AI fails, you can “fire” the model, but are you going to want to replace it with a different model? Most likely, the shareholders will reverse course and put a human back in charge. Then they can fire the human again later.
A few high profile companies might go for it. Then it will go badly and nobody else will try.
I could imagine a world where whole virtual organizations could be spun up, and they can just run in the background creating whole products, marketing them, and doing customer support, etc.
Right now the technology doesn’t seem there yet, but it has been rapidly improving, so we’ll see.
I could definitely see rich CEOs funding the creation of a “celebrity” bot that answers questions the way they do. Maybe with their likeness and voice, so they can keep running companies from beyond the grave. Throw it in one of those humanoid robots and they can keep preaching the company mission until the sun burns out.
What a nightmare.
Check out the novel Accelerando by Charles Stross, that thing is part of the plot.
Thanks for the suggestion, I’ll check it out!
I could imagine a world where whole virtual organizations could be spun up, and they can just run in the background creating whole products, marketing them, and doing customer support, etc.
Perhaps we could have it sell Paperclips. With the sole goal of selling as many paperclips as possible.
Surely, selling something as innocuous as paperclips could never go wrong.
Certainly the CEOs will patiently ensure guardrails are in place before chasing a ROI. Right? … Right?
Uh oh…
I have been having this vision you described for quite some time now.
As time progresses, the availability of resources on Earth increases because we learn to collect and process them more efficiently; on the other hand, the number of jobs (or the demand for human labor) decreases continuously, because more and more work gets automated.
So, if you’d draw a diagram, it would look something like this: the X-axis is time, with one curve for resource availability rising and another for the demand for human labor falling. As we progress into the future, the curves cross, and that completely changes the game. Instead of being a society that is driven by a constant shortage of resources and a constant lack of workers (causing a high demand for workers and a lot of jobs), it’d be a society with a shortage of jobs (and therefore meaningful employment), but with an abundance of resources. What do we do with such a world?
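A minimal sketch of those two curves, with made-up linear trends just to show the crossover (the numbers mean nothing; only the shapes matter):

```python
def resources(t):
    """Hypothetical: resource availability rising over time."""
    return 10 + 2 * t

def labor_demand(t):
    """Hypothetical: demand for human labor falling as work gets automated."""
    return 100 - 3 * t

# First point where available resources overtake the demand for labor:
crossover = next(t for t in range(100) if resources(t) > labor_demand(t))
print(f"with these toy slopes, the curves cross at t={crossover}")
```

Before the crossover you have the familiar scarcity-driven society; after it, the abundance-with-few-jobs one.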
in all dialectical seriousness, if it appeases the capitalists, it will happen. “first they came with ai for the help desk…” kind of logic here. some sort of confluence of Idiocracy and The Matrix will be the outcome.
Love that term dialectical seriousness, have to admit i had to look it up :)
You mean dialectical whimsiness
No, because someone has to be the company’s scapegoat… but if the ridiculous post-truth tendencies of some societies increase, then maybe “AI” will indeed gain “personhood”, and in that case, maybe?
I don’t see any other future.
Wasn’t it Willy Shakespeare who said “First, kill all the Shareholders” ? That easily manipulated stock market only truly functions for the wealthy, regardless of harm inflicted on both humans and the environment they exist in.
Sadly don’t think this is going to happen. A good CEO doesn’t make calculated decisions based on facts and judge risk against profit. If he did, he would, at best, be a normal CEO. Who wants that? No, a truly great CEO does exactly what a truly bad CEO does; he takes risks that aren’t proportional to the reward (and gets lucky)!
This is the only way to beat the game, just like with investments or roulette. There are no rich great roulette players going by the odds. Only lucky.
Sure, with CEOs, this is on the aggregate. I’m sure there is a genius here and a Renaissance man there… But on the whole, best advice is “get risky and get lucky”. Try it out. I highly recommend it. No one remembers a loser. And the story continues.
Well, you will be happy to hear that AI does make calculated risks, but they are not based on reality, so they are, in fact, risks.
You can’t just type “Please do not hallucinate. Do not make judgement calls based on fake news”
I’m not sure quite how it relates to what I said. Maybe we are looking at the word risk differently. Let me give an easy example that shows what I think normally is hidden because of complexity.
Five CEOs are faced with the same opportunity to invest heavily in a make-or-break deal. If they do it, they either succeed or they go bust. This investment, for one reason or another, only has one winner (because we are simplifying a complex real-world problem). All five CEOs invest, four go bust, and one wins big. In this simplified example, the one winning CEO would be seen as a great CEO. After all, he did great. The reasonable decision would have been to not invest, but that doesn’t make you a great CEO who can move on to better, greener jobs or cash out huge bonuses. No one remembers the reasonable CEO that made expected gains without unneeded risks.
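That survivorship effect is easy to simulate (purely illustrative odds: a 20% chance to quadruple the stake, so the expected value of betting is 0.8 per dollar, worse than not betting at all):

```python
import random

def risky_bet(stake=1.0, win_prob=0.2, payout=4.0):
    """All-in bet: small chance of a big win, otherwise bust.
    EV = 0.2 * 4.0 = 0.8 per dollar, so the 'reasonable' move is to pass."""
    return stake * payout if random.random() < win_prob else 0.0

random.seed(0)
outcomes = [risky_bet() for _ in range(5)]
winners = sum(1 for v in outcomes if v > 0)
print(f"outcomes: {outcomes}")
print(f"{winners} 'great CEO(s)', {5 - winners} forgotten; "
      f"average return: {sum(outcomes) / 5:.2f} per dollar staked")
```

Run it enough times and the average return settles near 0.8, yet whoever wins any single round looks like a genius.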
If AI ends up running companies better than people, won’t shareholders demand the switch?
Yes. It might be unorthodox at first, but they could just take a vote, and poof, done.
And since corporations are already treated as “people” under the law, replacing a human CEO with an AI isn’t just swapping a worker for a machine, it’s one “person” handing control to another.
Wat?
No. What?
So you just used circular logic to make the AI a “person”… maybe you’re saying once it is running the corporation, it is the corporation? But no.
Anyway, corporations are “considered people” in the US under the logic that corporations are, at the end of the day, just collections of people. So you can, say, go to a town hall to voice your opinion as an individual. And you can gather up all your friends to come with you, and form a bloc which advocates for change. You might gain a few more friends, and give your group a name, like “The Otter Defence League.” In all these scenarios, you and others are using your right to free speech as a collective unit. Citizens United just says that this logic also applies to corporations.
That means CEOs would eventually have to replace themselves
CEOs wouldn’t have to “replace themselves” any more than you have to find a replacement if your manager fires you from Dairy Queen.
their decisions all seem very formulaic, and they could definitely be made faster if you just typed it into ChatGPT
?
I asked ChatGPT about 2 years ago to write a response from a CEO explaining that while it was their best financial year, they would be eliminating my department along with 500 workers, with no bonuses for the rest. It pumped out a nearly identical response to my CEO’s.
Subject: Q2 2023 Quarterly Update: Record-breaking Profits, Strategic Realignments, and Compensation Adjustments
Dear esteemed stakeholders,
I am thrilled to present to you the second-quarter update, brimming with noteworthy achievements and strategic developments that have significantly impacted our organization’s performance. Despite operating in a dynamic and highly competitive landscape, we have delivered outstanding financial results that surpass any previous decade’s records. I am excited to share these details and provide insight into the exceptional growth we have experienced in recent months.
Financial Highlights:
In Q2 2023, we witnessed a meteoric rise in our profitability, with net earnings reaching unprecedented heights, outperforming any comparable period over the past ten years. The diligent efforts of our teams, coupled with robust market conditions, synergistic acquisitions, and optimized cost structures, have paved the way for this remarkable achievement.
Strategic Realignment:
Our success can be primarily attributed to our relentless pursuit of strategic realignment initiatives across multiple fronts. By strategically refocusing our core business lines and leveraging our distinctive competencies, we have effectively capitalized on emerging market trends while fortifying our position as an industry leader.
Our investments in cutting-edge technology and digital transformation initiatives have empowered us to unlock new opportunities and drive operational efficiency. The implementation of data-driven analytics has yielded insightful decision-making capabilities, enabling us to optimize resource allocation and enhance overall productivity.
Product Portfolio Optimization:
Our product portfolio underwent a comprehensive review during Q2, resulting in strategic pruning and refocusing efforts. By prioritizing high-growth areas with maximum revenue potential and aligning our offerings with evolving customer needs, we have ensured a sharper competitive edge and amplified market penetration. This proactive approach has allowed us to streamline our operations while effectively positioning ourselves as a provider of innovative solutions.
Employee Recognition and Compensation Adjustments:
Regrettably, amidst these commendable financial achievements, it is imperative that we address a necessary adjustment to our compensation policies. While profits have soared, we have made the difficult decision not to award annual bonuses to our employees this year. This choice was made to safeguard the long-term sustainability and growth of our organization, considering the dynamic market conditions and the need for prudent financial management.
It is important to emphasize that this decision was not taken lightly, and we remain deeply committed to the well-being and development of our employees. Alternative avenues, such as performance-based incentives and recognition programs, will be explored to ensure ongoing motivation and engagement within our workforce. We firmly believe that nurturing a conducive work environment, coupled with career advancement opportunities, will continue to foster a culture of excellence and drive collective success.
Looking Forward:
Moving forward, we will remain steadfast in our commitment to driving sustainable growth, capitalizing on emerging market opportunities, and nurturing a resilient organizational culture. Our strategic initiatives will continue to prioritize innovation, operational excellence, and customer-centricity, ensuring our ability to adapt and thrive in an ever-changing business landscape.
I extend my deepest gratitude to each member of our organization, whose unwavering dedication and relentless pursuit of excellence have contributed to our resounding success. Together, we will navigate the evolving market dynamics and deliver sustainable value to our stakeholders.
Thank you for your continued support.
Sincerely,
[Your Name] CEO, [Company Name]
It’s scary how much this reads like the emails I receive from $corporate_management
Try doing it now and compare it and see what it does differently.
Isn’t this sorta paradoxical? Like, either CEOs are actually worth the insane money they make, or a Palm Pilot could replace them, but somehow they are paid ridiculous amounts for… what?
No, it’s not paradoxical. You are conflating time points.
I won’t debate the “value” of CEOs, but in this system, their value is subject to market conditions like any other. Human computers were valued much more before electrical computers were created. Aluminum was worth more than gold before a fast and cheap extraction process was invented.
You could not replace a CEO with a Palm pilot 10 years ago.
I guess I was being a bit over the top, the CEOs are the capitalists. I guess it’s possible they are doing their job with LLMs now, but just behind the scenes. Like, either they are worth what they are paid, or the system is broken AF and it doesn’t matter.
I just don’t see them being replaced in any meaningful way.
CEOs may not be the capitalists at the top of a particular food chain. The shareholding board is, for instance. They can be both but there are plenty of CEO level folks who could, with a properly convinced board, be replaced all nimbly bimbly and such.
I guess, but they sure shovel plenty of money at say… Musk. So what? Is he worth a trillion? It seems the boards could trim a ton of money if ceos did nothing. Or they do lots and it’s all worth it. Who’s to say.
I just don’t see LLMs as the vehicle to unseat CEOs, or maybe I’m small minded idk.
Musk is a shareholder. He owns large parts of the companies he’s the CEO of.