AI ALIGNMENT FORUM
Questions
Top Questions
51 · Have LLMs Generated Novel Insights? · Abram Demski, Cole Wyeth, Kaj Sotala · 4mo · 19
42 · why assume AGIs will optimize for fixed goals? · nostalgebraist, Rob Bensinger · 3y · 3
27 · What convincing warning shot could help prevent extinction from AI? · Charbel-Raphael Segerie, Diego Dorn, Peter Barnett · 1y · 2
40 · Seriously, what goes wrong with "reward the agent when it makes you smile"? · Alex Turner, johnswentworth · 3y · 13
69 · Why is o1 so deceptive? · Abram Demski, Sahil · 9mo · 14
Recent Activity
51 · Have LLMs Generated Novel Insights? · Abram Demski, Cole Wyeth, Kaj Sotala · 4mo · 19
42 · why assume AGIs will optimize for fixed goals? · nostalgebraist, Rob Bensinger · 3y · 3
27 · What convincing warning shot could help prevent extinction from AI? · Charbel-Raphael Segerie, Diego Dorn, Peter Barnett · 1y · 2
7 · Egan's Theorem? · johnswentworth · 5y · 7
40 · Seriously, what goes wrong with "reward the agent when it makes you smile"? · Alex Turner, johnswentworth · 3y · 13
14 · Is weak-to-strong generalization an alignment technique? · cloud · 4mo · 1
9 · What is the most impressive game LLMs can play well? · Cole Wyeth · 5mo · 8
4 · How counterfactual are logical counterfactuals? · Donald Hobson · 6mo · 9
16 · Are You More Real If You're Really Forgetful? · Thane Ruthenis, Charlie Steiner · 7mo · 4
6 · Why not tool AI? · smithee, Ben Pace · 6y · 2
69 · Why is o1 so deceptive? · Abram Demski, Sahil · 9mo · 14
7 · Is there any rigorous work on using anthropic uncertainty to prevent situational awareness / deception? · David Scott Krueger · 9mo · 5