Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Some guy commented “The solution is easy: make the AI think humans are cute. Af… (ytc_UgxdyTHwU…)
- i doubt any real person ever made me so angry and disturbed me so much ... inclu… (ytr_Ugz7ZwXLr…)
- I wonder if AI will ever be able to solve the human aging process and allow huma… (ytc_Ugz73lYDE…)
- No, “AI scientists” don’t think there’s a monster inside ChatGPT… losers on the … (ytc_UgwCgsS0c…)
- He said something very key. They solved the problem of immorality, their whole g… (ytc_UgytDhtVo…)
- Very weak arguments against UBI. 1.) Human labor is inefficient and cannot be im… (ytc_Ugx4BDYa8…)
- What did you expect, he is a sociopath and well let's not forget about the Epste… (ytr_UgySp_QqL…)
- Imagine giving them a pencil, eraser and a paper to recreate their drawing in re… (ytc_UgwUG-mwj…)
Comment
Eliezer Yudkowsky says the same thing. He’s been in AI safety research for 20 years. He said in 2015 there was some A.I. conference and Elon and others with money were there but he said he gave up on elon being able to or wanting to do anything to slow it down. Eliezer seems to have given up. Almost quit work. He says anyone in AI should find another job. There’s almost no way to stop this but we can maybe slow it down and enjoy our last few years.
youtube · AI Governance · 2023-03-22T04:3… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
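For downstream processing, a coded comment can be treated as a small typed record over the four dimensions above. The sketch below is illustrative, not the project's actual code, and the allowed value sets are only those observed in the raw response on this page; the full codebook may define additional categories.

```python
from dataclasses import dataclass

# Value sets observed in this sample; the full codebook may include more.
RESPONSIBILITY = {"developer", "user", "distributed", "ai_itself", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"regulate", "ban", "none", "unclear"}
EMOTION = {"fear", "outrage", "approval", "indifference", "mixed"}

@dataclass
class CodedComment:
    """One coded comment, matching the keys in the raw LLM response."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise ValueError if any dimension holds an unexpected value."""
        for name, value, allowed in (
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ):
            if value not in allowed:
                raise ValueError(f"unexpected {name} value: {value!r}")
```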
Raw LLM Response
[
{"id":"ytc_UgzAhrSdO4H44TncECN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw6RKdGMnKrpu9pF3N4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyJDZwx0pbwzqE8Xpd4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzud7eaKgR1e0rDrTh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw0HhRTSw1E29mV_DV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx2tNJpfzbTqfT8X0B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzcwhO9eRFBxcgvNbF4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy5hl5uVhwVhess-cZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzeTtmM3Rne0CDgi8d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwpgFHiNT0e9p1D4jJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
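The "Look up by comment ID" view amounts to indexing raw responses like the one above by their id field. A minimal sketch, assuming each raw response is stored as a JSON array as shown (the file name raw_responses.json and the helper index_by_id are hypothetical):

```python
import json

def index_by_id(path: str) -> dict[str, dict]:
    """Load one raw LLM response (a JSON array of coded comments)
    and index the entries by their comment ID."""
    with open(path, encoding="utf-8") as f:
        batch = json.load(f)
    return {entry["id"]: entry for entry in batch}

# Hypothetical file name; the ID below is taken from the batch shown above.
coded = index_by_id("raw_responses.json")
print(coded["ytc_Ugy5hl5uVhwVhess-cZ4AaABAg"])
# -> {'id': 'ytc_Ugy5hl5uVhwVhess-cZ4AaABAg', 'responsibility': 'developer',
#     'reasoning': 'consequentialist', 'policy': 'regulate', 'emotion': 'fear'}
```

Note that the entry returned for ytc_Ugy5hl5uVhwVhess-cZ4AaABAg matches the Coding Result table above, which is presumably how the per-comment view is populated.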