Robin II #2

Jul. 19th, 2025 02:19 pm
iamrman: (Buggy)
[personal profile] iamrman posting in [community profile] scans_daily

Writer: Chuck Dixon

Pencils: Tom Lyle

Inks: Bob Smith


Just what is the Joker planning? Robin certainly doesn’t know.



Richard Dragon: Kung Fu #9

Jul. 19th, 2025 12:06 pm
iamrman: (Marin)
[personal profile] iamrman posting in [community profile] scans_daily

Writer: Dennis O’Neil

Pencils and inks: Ric Estrada


Richard Dragon and his supporting cast are asked to investigate reports of a mantis-man threatening the tourist trade in the Caribbean.



Mister Miracle #16

Jul. 18th, 2025 02:30 pm
iamrman: (Bon Clay)
[personal profile] iamrman posting in [community profile] scans_daily

Words and pencils: Jack Kirby

Inks: Mike Royer


Shilo Norman takes on the insect hordes of Professor Egg. (Unfortunately, not another form of Egg-Fu.)



Ka-Zar #7

Jul. 18th, 2025 12:35 pm
iamrman: (Squirrel Girl)
[personal profile] iamrman posting in [community profile] scans_daily

Writer: Mark Waid

Pencils: Andy Kubert

Inks: Jesse Delperdang


The Plunderer's goons are on the hunt for Shanna.



Justice League of America #254

Jul. 18th, 2025 10:30 am
iamrman: (Franky)
[personal profile] iamrman posting in [community profile] scans_daily

Writer: Gerry Conway

Pencils: Luke McDonnell

Inks: Bill Wray


Despero has defeated the veteran heroes, so it comes down to the rookies to save the day.



Incredible Hulk #163

Jul. 17th, 2025 05:53 pm
iamrman: (Marin)
[personal profile] iamrman posting in [community profile] scans_daily

Writer: Steve Englehart

Pencils: Herb Trimpe

Inks: Sal Trapani


The Hulk stumbles upon a secret Soviet base underneath the Arctic.



sometimes, I think of ponies

Jul. 17th, 2025 08:43 am
solarbird: (korra-on-the-air)
[personal profile] solarbird

Have you ever noticed that every projection about “AGI” and “superintelligence” has an “and then a miracle occurs” step?

I have.

I shouldn’t say every projection – there are many out there, and I haven’t seen them all. But every one I’ve personally seen has this step. Somewhere, sometime, fairly soon, generative AI will create something that triggers a quantum leap in capability. What will it be? NOTHING MERE HUMANS CAN UNDERSTAND! Oh, sometimes they’ll make up something – a new kind of transistor, a new encoding language (like sure, that’ll do it), whatever. Sometimes they just don’t say. Whatever it is, it happens, and then we’re off to the hyperintelligent AGI post-singularity tiems.

But the thing is … the thing is … for Generative AI to create a Magic Something that Changes Everything – to have this miracle – you have to already have hyperintelligent AGI. Since you don’t… well…

…that’s why it’s a miracle. Whether they realise it or not.

I’m not sure which is worse – that they do realise it, and know they’re bullshitting billions of dollars away from productive society to build up impossible wealth before the climate change they’re helping make worse fucks everything, so they can live like feudal kings from their bunkers; or that they don’t, and are spirit dancing, wanking off technofappic dreams of creating a God who will save the world with its AI magic, a short-term longtermism, burning away the rest of the carbon budget in a Hail Mary that absolutely will not connect.

Both possibilities are equally batshit insane, I know that much. To paraphrase a friend who knows far more about the maths of this than I, all the generative AI “compute” in the universe isn’t going to find fast solutions to PSPACE-hard problems. It’s just not.

And so, sometimes, sometimes, sometimes, I think of…

…I think of putting a short reading/watching list out there, a list that I hesitate to put together in public, because the “what the actual fuck” energies are so strong – so strong – that I can’t see how anyone could take it seriously. And yet…

…so much of the AI fantasia happening right now is summed up by three entirely accessible works.

Every AI-fantasia idea, particularly the ideas most on the batshit side…

…they’re all right here. And it’s all fiction. All of it. Some of it is science-shaped; none of it is science.

But Alice, you know, we’re all mad here. So… why not.

Let’s go.

1: Colossus: The Forbin Project (1970)

This is the “bad end” you see so much in “projections” about AI progression. A new one of these timelines just dropped; they have a whole website you can play with. I’m not linking to it because why would I, holy shit, I don’t need to spread their crazy. But there’s a point in the timeline/story that they have you read – I think it’s in 2027 – when you can make a critical choice. It’s literally a one-selection choose-your-own-path adventure!

The “good” choice takes you to a galactic civilisation managed by friendly hyperintelligent AGI.

The “bad” choice is literally the plot of The Forbin Project with an even grimmer ending. No, really. The beats are very much the same. It’s just The Forbin Project with more death.

Well. And a bioweapon. Nukes are so messy, and affect so much more than mere flesh.

2: Blindsight, by Peter Watts (2006)

This rather interesting – if bleak – novel presents a model of cognition which lays out an intriguing thought experiment, even if it … did not sit well with what I freely admit is my severely limited understanding of cognition.

(It doesn’t help that it directly contradicts known facts about self-awareness and cognition in various animals, and did so even when it was published. That doesn’t make it a worse thought experiment, however. Or a worse novel.)

It got shortlisted – deservedly – for a bunch of awards. But that’s not why it’s here. It’s here because its model of cognition is functionally the one used by those who think generative AI and LLMs can be hyperintelligent – or even functionally intelligent at all.

And it’s wrong. As a model, it’s just wrong.

Finally, we get to the “what.” entry:

3: Friendship is Optimal, by Iceman (2012)

Friendship is Optimal is obviously the most obscure of these works, but also, I think maybe the most important. It made a big splash in MLP fandom, before landing like an absolute hand grenade in the nascent generative AI community when it broke containment. Maybe not in all of that latter community – but certainly in the parts of which I was aware. So much so, in fact, that it made waves even beyond that – which is when I heard of it, and how I read it.

And yes… it’s My Little Pony fanfic.

Sorta.

It’s that, but really it’s more an explicit AI takeoff story, one which is absolutely about creating a benevolent hyperintelligent Goddess AI construct who can, will, and does remake the world, destroying the old one behind her.

Sound familiar?

These three works include every idea behind every crazy line of thought I’ve seen out of the Silicon Valley AI crowd. These three works right here. A novel or a movie (take your choice, the movie’s quite good, I understand the novel is as well), a second novel, and a frankly remarkable piece of fanfic.

For Musk’s crowd in particular? It’s all about the model presented in Friendship is Optimal, except, you know, totally white supremacist. They’re even kinda following the Hofvarpnir Studios playbook from the story, but with less “licensed property game” and a lot more “Billionaire corporate fascism means you don’t have to pay employees anymore, you can just take all the money yourself.”

…which is not the kind of sentence I ever thought I’d write, but here we are.

You can see why I’m hesitant to publish this reading list, but I also hope you can see why I want to.

If you read Friendship is Optimal, and then go look at Longtermism… I think you definitely will.

So what’re we left with, then?

Some parts of this technology are actually useful. Some of it. Much less than supports the valuations, but there’s real use here. If you have 100,000 untagged, undescribed images and AI analysis gives 90% of them reasonable descriptions, that’s a substantial value add. Some of the production tools are good – some of them are very good, or will be, once it stops being obvious that “oh look, you’ve used AI tools on this.” Some of the medical imaging and diagnostic tools show real promise – though it’s always important to keep in mind that antique technologies like “Expert Systems” seemed just as promising, in the lab.

Regardless, there’s real value to be found in those sorts of applications. These tasks are where it can do good. There are many more than I’ve listed, of course.

But AGI? Hyperintelligence? The underlying core of this boom, the one that says you won’t have to employ anyone anymore, just rake in the money and live like kings?

That entire project is either:

A knowing mass fraud inflating a bubble like nobody’s seen in a century, one that instead of breaking a monetary system might well finish off any hopes for a stable climate in an Enron-like insertion of AI-generated noise followed by AI-generated summarisation of that noise – noise that no one reads, that serves no purpose and adds no value, but costs oh, oh so very much electricity and oh, oh, oh so very much money;

A power play unlike anything since the fall of the western Roman empire, where the Church functionally set itself up in parallel to, and as a substitute for, the Roman government, to the point that the latter finally collapsed, all in service of setting up a God’s Kingdom on Earth to bring back Jesus; only in this case, it’s setting up the techbro billionaires as a new nobility, manipulating the hoi polloi from above with propaganda and disinformation sifted through their “AI” interlocutors;

Or an absolute psychotic break by said billionaires and fellow travellers so utterly unwilling and utterly unable to deal with the realities of climate change that they’ll do anything – anything – to pretend they don’t have to, including burning down the world in the service of somehow provoking a miracle that transcends maths and physics in the hope that some day, some way, before it’s too late, their God AI will emerge and make sure everything ends up better… in the long term.

Maybe, even, it’s a mix of all three.

And here I thought my reading list was the scary part.

Silly me.

Posted via Solarbird{y|z|yz}, Collected.

Hawk and Dove #5

Jul. 17th, 2025 02:31 pm
iamrman: (Sindr)
[personal profile] iamrman posting in [community profile] scans_daily

Writers: Barbara and Karl Kesel

Pencils: Rob Liefeld

Inks: Karl Kesel


Hawk and Dove travel to the Chaos Realm to fix Kestrel for good.



Guy Gardner: Warrior #25

Jul. 17th, 2025 12:31 pm
iamrman: (Mooreen)
[personal profile] iamrman posting in [community profile] scans_daily

Writer: Beau Smith

Pencils: Mitch Byrd

Inks: Dan Davis


The Phantom Stranger aids Guy in his fight against a creature named Dementor that just won’t shut up.



Green Lantern #189

Jul. 17th, 2025 10:31 am
iamrman: (Power)
[personal profile] iamrman posting in [community profile] scans_daily

Writer: Steve Englehart

Pencils: Joe Staton

Inks: Bruce Patterson


Hal and Carol visit a long-forgotten friend.


