12 Comments

Way to bury the lede: congratulations on the PROSE award!!

Note that making new math or physics (as opposed to solving artificial problems like Olympiad ones) requires a very illegible skill set and has no clear ground truth, unlike coding. By any chance have you read Polanyi's books The Tacit Dimension and Personal Knowledge? I think he really hits the nail on the head in this regard.

People have seized on this as a deflationary interpretation of o1-style results, but they generally don't turn around and ask an important question: "but what about humans?"

Making genuinely new math or physics is something that few humans do, and the ones who do, generally do so only a few times, most often once in a lifetime, and when they do, much of the observed payoff is long afterwards, when they are retired or dead. So... how do *humans* learn to do these things and do them at all? If the skill set is illegible to LLMs reading books, how is it legible to humans reading the same books? If the ground truth is so inscrutable and you may never find out if something was a good idea, how do humans learn what are good ideas or how to do things like 'be a historian' given only a few years of historian-specific education/work, containing perhaps 0 novel predictions which could be verified with a ground-truth?

A third point could be that merely reading books is not enough to gain a full understanding of a topic. This is likely less important now that we have video lectures, code, and other datasets alongside multimodal models and reinforcement learning from human feedback, but it has been documented (e.g. by Lucio Russo) that scholars in the imperial period had some access to Hellenistic texts but, in the absence of a human teacher, systematically misunderstood them: see Vitruvius claiming that the Sun attracts the planets by means of “triangular rays” (likely a misinterpretation of a drawing in an ancient text).

On a completely different note: a year ago I was talking with a guy who works in oil and gas; I believe he was an engineer who had moved into a less technical management role. I asked him what he thought about the impact of AI on his job. His answer was that AI didn't stand a chance of competing with humans, because what is written down in formal documentation is largely cover-your-ass bullshit, and the actual decisions are made at the dinner table or at the golf club. This is unlikely to apply to science (though some fields are more bullshit-heavy than others), but there is still an important social dimension determining, e.g., whether a new theory is accepted into the consensus view.

This is a common cope, but such human cartels and good-old-boy incumbent networks, I think, simply change the dynamics from a steady replacement to, as Hemingway described bankruptcy, "gradually, then suddenly".

This may well be the case, and it should be possible to make a little model, one of those the economists seem to like so much, that reproduces this change in dynamics.

Edit: here it is https://mariopasquato.substack.com/p/a-simple-model-of-humanai-replacement

Oh, I don't think that's necessary. Economists already have many models of cartel dynamics; specifically, that cartels are inherently unstable because each member gains too much from defection. (Antitrust law encodes this by allowing leniency for cartel members who turn in the other cartelists, and increasing damages on the rest, so each member is not just incentivized to cheat on the cartel by doing the forbidden thing, but must fear that in their prisoner's dilemma, they'll be left playing Cooperate while a fellow has begun playing Defect.)

So in the scenario where the AIs are capable of the task but do not get used because there is an 'overhang' maintained by pure human chauvinism, forming a de facto 'cartel' whose members agree to buy only from each other (i.e. from humans), you have an unstable cartel: the first member to integrate AIs can make huge profits. And I'm pretty sure that people in oil & gas, however much they may like dealing with fellow humans over steak dinners or getting in a few holes, like money even more.

(I'm reminded of the digitization and then automation of various stock exchanges and trading floors. The traders preferred to deal with their fellow humans and they collectively tried to resist... but going electronic was just too profitable. So there's still open-outcry trading in some places, but nothing remotely like what it used to be.)
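The "gradually, then suddenly" dynamic being debated here, where a few eager defectors trigger a cascade once adoption becomes profitable for everyone else, could be sketched as a Granovetter-style threshold model. (This framing and all parameters are my own illustration, not the model from the linked post.)

```python
# Granovetter-style threshold model of cartel collapse (illustrative).
# Each firm adopts AI once the share of firms already using it reaches
# its private threshold; a handful of eager firms defect almost
# immediately, and the conformist majority then follows all at once.

def cascade(thresholds):
    """Return the number of adopters after each round of updating."""
    n = len(thresholds)
    adopted = 0
    history = [adopted]
    while True:
        share = adopted / n
        now = sum(1 for t in thresholds if t <= share)
        if now == adopted:  # fixed point: nobody else moves
            break
        adopted = now
        history.append(adopted)
    return history

# Five eager defectors with low thresholds, then 45 conformists who
# only move once 10% of the industry has already defected.
thresholds = [0.00, 0.02, 0.04, 0.06, 0.08] + [0.10] * 45
print(cascade(thresholds))  # → [0, 1, 2, 3, 4, 5, 50]
```

The output shows the claimed shape: a slow trickle of individual defections, then total collapse of the cartel in a single round once the conformists' threshold is crossed. With no eager defector (all thresholds above zero), the cartel holds indefinitely, which is the "overhang" scenario.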

I wonder whether the answer is that making new science shares something of the nature of walking or throwing rocks, activities that rely on a model of the world that is hard to express in words. Take this conversation with ChatGPT 4.5 as an example (I hope it's visible to everyone with the link):

https://chatgpt.com/share/67ca0b6a-d2bc-800e-9554-d097e8091dbe

It seems to struggle to understand that choosing the initial conditions to produce a useful trajectory involves… well, actually placing a ball in space.

It is too simplistic, and a bit lazy, to say "garbage in, garbage out" about AI, using it as a shorthand for dismissal. While what goes in and out can be of decent quality for math and coding results, "garbage" can also describe a reader's or viewer's level of knowledge, skepticism, or desire for an answer they can simply understand.

Differentiating provable results from those that require intuition, or the ability to hold contradictory thoughts at the same time, is, as far as I know, mostly not within the purview of AI. Anyone who has conducted in-depth research over a short or long period, in a library or any other way, has experienced what you describe: connections get made that might otherwise never occur, not as a random event but only as the result of taking time with an open mind (whatever that might mean). It takes the time it takes; speed is not a factor. Social skills, such as those your friend Felipe exhibited, can be as important as any other; I have experienced exactly that with the help of librarians, and it's a big thrill.

I know this is a long comment, but, hey. Also want to mention the journalist John McPhee, whose interview process includes asking people the same question with variations many times over more than a single session. When he gets the same answer four or five times, he knows he's got it.

I'm a professional photographer, involved with video. Art (and writing) require the guidelines your painting teacher set out. My friends, including those who have shot major Hollywood movies, are of course obsessed with AI, but mostly see it as a tool, not a colleague. Sometimes the results are amazing, sometimes wretched; either way, it requires years of experience to develop the taste (intuition) to know if AI has produced something good or even "good enough". Humans are the most important consumers and manipulators of AI, not the machine. It don't know from negative space.

Thank you, that was a brilliant essay. In many ways, AI has forced us to distill what is unique about our craft.

I think this is very clear-sighted on AI's impact on career strategies. I'll skip the poll because I'd be excited to read any one of those!
