Last week, Dario Amodei, the CEO of Anthropic, wrote a 15,000-word essay on AI. As the head of Anthropic, he is obviously at the forefront of the change that GenAI is ushering in, and I was curious to understand his perspective on the topic.
That being said, I did not feel like reading a 15,000-word essay. I was not sure I would get through all of it, and I was not sure I would understand it. So I decided to turn to GenAI to help me understand the article: I gave AnyQuest the link and asked it to help me make sense of it. If anyone is interested in how I went about this, or in understanding the article without reading it, you can follow my thoughts here - https://pub.anyquest.ai/anthropic-px5pkx
Since I had no idea what the article was about, my first prompt was -
"Please read this article in detail and provide me a framework that I can use to understand all that is written in this article - https://darioamodei.com/machines-of-loving-grace"
I was not sure how to come up with questions to understand the article, so this prompt helped me get started.
I then used the framework the AI generated to ask all my follow-on questions. Here are some examples -
Please provide me the author's background
What is his main argument in the article?
What are the risks of AI that he discusses? Can you give an example for each risk?
My learning style is to learn by example. I understand concepts much better when an example is shared, so going forward I asked it to share examples with every question. That way it wasn't all theoretical.
What are the potential upsides he highlights? Please share these with examples
What are examples of the positive outcomes he shares in each of these areas? Please share examples. (The framework listed: Positive Outcomes - envisions a future where AI facilitates advancements in Biology and Health; Neuroscience and Mental Health; Economic Development and Poverty Alleviation; Peace and Governance; Work and Meaning.)
Can you elaborate on what constitutes powerful AI, with examples?
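The pattern here - one framework prompt up front, then follow-on questions that each end with a request for examples - can be sketched as a simple prompt builder. This is a hypothetical illustration only; AnyQuest's actual interface is a chat conversation, not code, and `build_prompts` is a made-up helper name.

```python
# Hypothetical sketch of the prompting pattern described above.
# ARTICLE_URL and build_prompts are stand-ins, not a real AnyQuest API.

ARTICLE_URL = "https://darioamodei.com/machines-of-loving-grace"

def build_prompts(follow_ups, learn_by_example=True):
    """Build the prompt sequence: a framework prompt first, then the
    follow-on questions, each optionally asking for examples."""
    prompts = [
        "Please read this article in detail and provide me a framework "
        "that I can use to understand all that is written in this "
        f"article - {ARTICLE_URL}"
    ]
    for question in follow_ups:
        if learn_by_example:
            # Learn-by-example style: append an explicit request for examples.
            question = f"{question} Please share examples."
        prompts.append(question)
    return prompts

follow_ups = [
    "Please provide me the author's background.",
    "What is his main argument in the article?",
    "What are the risks of AI that he discusses?",
    "What are the potential upsides he highlights?",
]

for prompt in build_prompts(follow_ups):
    print(prompt)
```

Each prompt would then be sent in turn to the chat session, so the conversation stays anchored to the framework the AI produced first.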
So, at the end of this process - which I enjoyed a lot more than reading the article, as I felt I was having a conversation with the author about it - what did I learn?
I left feeling quite despondent. He talks about the need for -
Proactive Engagement in AI Safety
Collaboration Across Disciplines
Establishment of Ethical Guidelines
Public Awareness and Education
Regulatory Frameworks for AI
Focus on Long-Term Implications
I think all the tech leaders at the forefront of AI are talking about the need for the above. They know this is going to be significantly impactful. That being said, we live in a world where it is getting harder for people to have conversations. The US is bitterly divided. People want to believe what they want to believe. Facts are boring; stories capture our imagination. Large groups of people are suspicious of science. Big corporations have lost the trust of people. Government entities have lost the trust of people. News outlets have lost the trust of people. The world is divided. In this environment, how will people agree on what to do about AI safety? How will we get collaboration across disciplines? How will we establish ethical guidelines - whose ethics? Who gets to decide which ethics are right and wrong? How does a public that learns through TikTok and tweets gain awareness? Who makes them aware?
I don't mean to be a downer, but one of the most disruptive technologies in history is landing among us, and there are no leaders people trust to make decisions for them.