What I learned from hanging out with Sam Altman this week
OpenAI's CEO shares some insights about what's next

"We are in the Sci-Fi level now," Sam Altman said casually. I was at an invite-only hackathon, held by OpenAI, for a small group of Y Combinator founders. We were there to push AI's boundaries. Sam, OpenAI's CEO, shared mind-blowing insights.
A hackathon is a short, intense event where teams quickly create and test new tech ideas to solve real-world problems.
The atmosphere was electric. Sam was revealing the latest in world-changing tech. Founders were building cutting-edge tools in healthcare, finance, engineering, and education. It felt like being at the center of the world.
OpenAI is now valued at $147 billion. Sam Altman might end up with a stake worth over $10 billion, though he denies this. He says he remains committed to building safe AI. For full disclosure, I should mention that I've invested in OpenAI in their previous round.
Sam told us that the AI versions we see today were actually developed two years ago. They've only been released after extensive testing. It's reasonable to assume they're already working on versions we won't see for another couple of years.
Sam predicted several things. First off, he thinks that there will be a steep improvement curve in AI capabilities going forward.
He says future AIs will completely crush existing intellectual benchmarks, like the USMLE or Math Olympiads. Remember, GPT was at the intellectual level of a preschooler just four years ago. That pace has created an unexpected problem: humans can't create tests hard enough to challenge these newer AIs.
When we asked him how long that would take, he said maybe 1-3 years.
Even more startling, Sam estimates artificial general intelligence - the conscious, sci-fi level AI - will be here within the next 5 years.
And the threat of human-level AI replacement? Possibly within a decade.
Why continue building if there is a possible existential risk to humans? Sam believes that it is inevitable someone will build it. So his goal is for his team to get there first and make sure they can put the guardrails in place before a bad actor creates it without safety measures.
It’s a classic case of The Prisoner’s Dilemma. The Prisoner's Dilemma is a game theory scenario where people must decide whether to cooperate for the good of everyone or pursue self-interest to the detriment of the group, without knowing the other's decision. Most end up pursuing self-interest out of mistrust.
In an AI arms race, with unknown competitors, the pressure to advance quickly is immense, even if it might not be in everyone's best interest.
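The dilemma described above can be sketched as a tiny payoff table. This is a minimal illustration, not from the article, and the payoff numbers are arbitrary placeholders chosen only to show the structure: each lab chooses to "cooperate" (slow down, prioritize safety) or "defect" (race ahead), and defecting is the better individual move no matter what the rival does.

```python
# Illustrative Prisoner's Dilemma payoffs (higher is better for that player).
# The specific numbers are assumptions; only their ordering matters.
PAYOFFS = {
    # (my choice, rival's choice): my payoff
    ("cooperate", "cooperate"): 3,  # everyone slows down safely
    ("cooperate", "defect"):    0,  # I slow down, rival races ahead
    ("defect",    "cooperate"): 5,  # I race ahead, rival slows down
    ("defect",    "defect"):    1,  # everyone races: risky for all
}

def best_response(rival_choice: str) -> str:
    """Return the choice that maximizes my payoff, given the rival's choice."""
    return max(("cooperate", "defect"),
               key=lambda mine: PAYOFFS[(mine, rival_choice)])

# Whichever the rival picks, defecting pays more...
print(best_response("cooperate"))  # -> defect
print(best_response("defect"))     # -> defect
```

Both players reason the same way and defect, even though mutual cooperation (3, 3) beats mutual defection (1, 1) for everyone, which is exactly the trap an AI arms race sets.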
Most AI researchers don't think it's likely that AI will result in the Terminator. But nearly 58% of those surveyed do believe there is at least a 5% chance that AI will threaten human existence. Similarly, when the atomic bomb was being developed, a minority of nuclear engineers thought it was possible it would ignite the atmosphere. They went ahead anyway, of course. So this is not our first time playing with extinction.
I remain less worried about that and more about the practical implications it will have on jobs. And this worry has only heightened since the hackathon.
Sam believes the biggest risk is human unpreparedness. That's partly why they're releasing these technologies quickly, to allow people to adapt and normalize these new capabilities. He said the things that are coming in a few years will seem crazy to us now, but will be normal when we get there. Like how ChatGPT feels normal now, despite it being able to do things that would’ve seemed like magic a few years ago.

The hackathon produced some amazing projects in the span of 24 hours. One of the winners was an AI healthcare company being built by MIT engineers. Their startup is an AI that lets doctors access the latest evidence-based recommendations. It's one of the best ideas I've seen from a non-medical team. In fact, we had built a prototype of this same product 2 years ago. We didn't have enough resources to keep working on it at the time. And I doubt we could've made it as good as these guys have.
In the hackathon, their team built a really compelling tool. They built a live AI intensivist consultant. Any healthcare provider can call it on the phone 24/7 to get help on a case. It was seamless and sounded fully human, responding instantaneously. It would be very hard to tell the difference between the AI and a real intensivist if you walked by. Moreover, this intensivist can speak over 100 languages. Imagine how much it can help underserved parts of the world.
Another winner created an AI mechanical engineer that could work with 3D modeling tools. They had it design airplane wings to specifications that could be sent to production. It completed a series of tasks that would take 15 mechanical engineers about 3 hours to perform in just 6 minutes.
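To put that claim in perspective, here is a quick back-of-envelope check using only the numbers quoted above (15 engineers, 3 hours each, versus 6 minutes of AI time):

```python
# Back-of-envelope speedup from the figures quoted in the article.
engineers = 15
hours_each = 3
human_minutes = engineers * hours_each * 60  # 2700 person-minutes of work
ai_minutes = 6

speedup = human_minutes / ai_minutes
print(f"~{speedup:.0f}x fewer person-minutes")  # -> ~450x fewer person-minutes
```

That is roughly 45 person-hours of engineering work compressed into 6 minutes, about a 450x reduction in person-minutes, setting aside any time spent writing the specifications or reviewing the output.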
OpenAI is giving my startup early access to their upcoming version a few months before public release. We are excited to be among the first to use it to build tools that can help doctors practice better medicine. Our latest AI app is helping clinics offer lifestyle medicine, improving patient outcomes and generating more revenue without additional work. You can sign up for our waitlist here.
We are already thinking of the ways we can take it a step further with the newest AI version we just previewed, and all the ideas we got from the other projects. Feel free to email me if you want to test our latest products or want to get involved in tinkering with them.
What tedious or annoying parts of medicine do you wish AI could take care of for you, so that you can focus on the parts you enjoy?
Best,
Mohammed