Hey, wait a minute... What if it's not Google that's the problem? What if...it's the entire industry that made Google think it's a good idea to put AI front and center? What if...it's Sam Altman and generative AI itself?
That would be super uncomfortable for someone like, you know, Paul Graham.
Today, in violation of federal law, OpenAI's security physically shoved me out the door because I requested a public record: the non-profit's most recent tax return. Fortunately I wasn't hurt. The company also threatened me with criminal charges.
The United States is a country where the government can't auto-fill your tax return or cause a web browser to perform basic addition of numbers, but millions of people—especially the media—still hero-worship white males who say that what we need is more AI-driven uncertainty going into an election.
Well, I've tested out both ChatGPT and Bing AI. I asked both if they could tell me which Harvard residential house had a blue dome.
ChatGPT told me "Mather House," which is made of concrete and has no dome at all.
Bing AI told me "Kirkland House," which does have a dome, but it's shiny and gold. When I asked, Bing also said it couldn't show me a picture of the blue Kirkland dome, but offered a URL. Then, before I could click it, Bing erased the entire message, told me it wasn't allowed to discuss such things, and demanded I change to a new topic. ???
So...basically all of this AI hype is a massive joke. This is a question that has one right answer and a lot of wrong ones. Identifying blue is not a particularly difficult computer science problem, and with some image processing magic, I'm sure it's possible to identify a dome from a limited set of pictures. Yet both failed. If these companies had released an AI chatbot that said 2 + 2 = 7, would we, and should we, be impressed? Because that's essentially what they just did.
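Just to show how low the bar is, here's a minimal sketch in Python of the "is this dome blue?" part of the problem, assuming you've already cropped a handful of photos down to each house's dome (the file names below are hypothetical): count the pixels where the blue channel clearly dominates the other two.

```python
# Minimal sketch: decide whether a cropped dome photo "looks blue."
# File names are hypothetical; any RGB images will do.
from PIL import Image

def blue_fraction(path, dominance=1.3):
    """Fraction of pixels whose blue channel clearly dominates
    red and green (a crude stand-in for 'looks blue')."""
    img = Image.open(path).convert("RGB")
    pixels = list(img.getdata())
    blue_pixels = sum(
        1 for r, g, b in pixels
        if b > dominance * max(r, g, 1)
    )
    return blue_pixels / len(pixels)

# Hypothetical cropped photos of a few candidate domes.
for house, photo in [("House A", "house_a_dome.jpg"),
                     ("House B", "house_b_dome.jpg"),
                     ("House C", "house_c_dome.jpg")]:
    print(house, round(blue_fraction(photo), 2))
```

Rank the houses by that fraction and the blue dome falls out. No billion-parameter model, no hallucinated concrete dome, no message that deletes itself before you can click the link.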