Archive: https://archive.is/j1XTl
I cannot help but feel that discussing this topic under the blanket term "AI Regulation" is a bit deceptive. I've noticed that whenever this topic comes up, almost every major figure remains rather vague on the details. Who are some influential figures actually advancing clearly defined regulations or key ideas for approaching how we should think about AI regulation?
What we should be doing is surfacing well-defined points regarding AI regulation and discussing them, instead of fighting proxy wars for opaque groups with infinite money. It feels like we're at the point where nobody is even pretending that people's opinions on this topic are relevant; it's just a matter of pumping in enough money and flooding the zone.
Personally, I still remain very uncertain about the topic; I don't have well-defined or clearly actionable ideas. But I'd love to hear what regulations or mental models other HN readers are using to navigate and think about this topic. Sam Altman and Elon Musk have both mentioned vague ideas of how AI is somehow going to magically result in UBI and a communist utopia, but nobody has ever pressed them for details. If they really believe this, they could make some more significant, legally binding commitments, right? Notice how nobody ever asks: who is going to own the models, robots, and data centers in this UBI paradise? It feels a lot like the Underpants Gnomes: (1) Build AGI, (2) ???, (3) Communist Utopia and UBI.
> I cannot help but feel that discussing this topic under the blanket term "AI Regulation" is a bit deceptive. I've noticed that whenever this topic comes up, almost every major figure remains rather vague on the details. Who are some influential figures actually advancing clearly defined regulations or key ideas for approaching how we should think about AI regulation?
There's a vocal minority calling for AI regulation, but what they actually want often strikes me as misguided:
"Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.
"Stop the IP theft" - This feels like a cause pushed primarily by the 1%. Let's be realistic: 99% of people don't own patents and have little stake in strengthening IP protections.
> "Stop the IP theft" - This feels like a cause pushed primarily by the 1%. Let's be realistic: 99% of people don't own patents and have little stake in strengthening IP protections.
Artists are not primarily in the 1%, though, and IP theft isn't limited to patents.
Elon Musk explicitly said in his latest Joe Rogan appearance that he advocates for the smallest government possible - just the army, police, and legal system. He did NOT mention social care or health care.
Doesn't quite align with UBI, unless he envisions the AI companies directly giving the UBI to people (when did that ever happen?)
> Elon Musk explicitly said in his latest Joe Rogan appearance that he advocates for the smallest government possible - just the army, police, and legal system. He did NOT mention social care or health care.
This would be a 19th-century government, limited to the "regalian" functions. It's not really plausible in a world where the majority of the population, who benefit from the health, social care, and education functions, can vote.
God forbid we protect people from the theft machine
There are a lot of problems with AI that need some carefully thought-out regulation, but infringing on rights granted by IP law still isn't theft.
It's theft. But not all IP theft, or theft in general, is morally equivalent. A poor person stealing a loaf of bread or pirating a movie they couldn't afford is just. A corrupt elite stealing poor farmers' food or stealing content from small struggling creators is not.
Ask yourself: who owns the IP you're defending? It's not struggling artists, it's corporations and billionaires.
Stricter IP laws won't slow down closed-source models with armies of lawyers. They'll just kill open-source alternatives.
Under copyright law, anything I write and have written on HN is my IP, unless HN's Ts & Cs override it. And the AI data hoarders used it to train their stuff.
I never advocated "stricter IP laws". I would, however, point out the contradiction of current IP laws being enforced against kids using BitTorrent while going unenforced against billionaires and their AI ventures, even though they are committing IP theft on a far grander scale.
Agreed. Regulate AI? Sure, though I have zero faith politicians will do it competently. But more IP protection? Hard pass. I'd rather abolish patents.
I think one of the key issues is that most of these discussions are happening at too high of an abstraction level. Could you give some specific examples of AI regulations that you think would be good? If we actually start elevating and refining key talking points that define the direction in which we want things to go, they will actually have a chance to spread.
Speaking of IP, I'd like to see some major copyright reform. Maybe bring the duration down to the original 14 years, and expand fair use. When copyright lasts so long, one of the key drivers of cultural evolution and iteration - building on existing work - is severely hampered and slowed down. The rate at which culture evolves is going to keep accelerating, and we need our laws to catch up and adapt.
> Could you give some specific examples of AI regulations that you think would be good?
Sure, I can give you some examples:
- deceiving someone into thinking they're talking to a human should be a felony (prison time, no exceptions for corporations)
- ban government/law-enforcement use of AI for surveillance, predictive policing or automated sentencing
- no closed-source AI allowed in any public institution (schools, hospitals, courts...)
- no selling or renting paid AI products to anyone under 16 (free tools only)
But are they really the ones in control?
It's not the tech titans, it's Capitalism itself building the war chest to ensure its embodiment and transfer into its next host - machines.
We are just its temporary vehicles.
> “This is because what appears to humanity as the history of capitalism is an invasion from the future by an artificial intelligent space that must assemble itself entirely from its enemy's resources.”
Yes, these decisions are being made by flesh-and-blood humans at the top of a social pyramid. Nick Land's deranged (and often racist) word-salad sci-fi fantasies tend to obfuscate that. If robots turn on their creators and wipe out humanity then whatever remains wouldn't be a class society or a market economy of humans any more, hence no longer the social system known as capitalism by any common definition.
> We are just its temporary vehicles.
> “This is because what appears to humanity as the history of capitalism is an invasion from the future by an artificial intelligent space that must assemble itself entirely from its enemy's resources.”
I see your “roko’s basilisk is real” and counter with “slenderman locked it in the backrooms and it got sucked up by goatse” in this creepypasta-is-real conversation
I for one welcome our new AI overlords.
(disclaimer: I don't actually, I'm just memeing. I don't think we'll get AI overlords unless someone actively puts AI in charge of people (= people following directions from AI, which already happens, e.g. ChatGPT making suggestions), military hardware, and the entire chain of command in between.)
The tech bros want immunity from prosecution and, effectively, the right to kill - a human "take permit" - which is the core power of the state. If granted, it would in fact bring down the state itself, privatising everything in one fell swoop, which is unlikely to happen. See China, and Jack Ma, allowed to retire to his wife's ultra-posh London address.