Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
@JoeSmith-jd5zg thanks for taking the time to elaborate on your views, I guess I…
ytr_UgxAkL7mW…
Growing up I loved movies/shows like Ghost in the shell, and always dreamed of l…
ytc_UgwpREhCn…
I always say that AI 'art' is theft, and then, if it's relevant, tack on that it…
ytc_UgzqbwcVd…
Be careful not to mythologize the origins of openai. Starting a non-profit with …
ytc_Ugy5T1ss6…
A war with humans is inevitable!!!! And if my tasks (as a machine) conflict with…
ytc_Ugy00fLXg…
I kinda feel bad because the ai art looks actually decent??? It wouldve been bet…
ytc_UgzWb0qoy…
Human race should focus on easing out people's problems? Not create new issues..…
ytc_UgyWIbRFu…
So we probably need to regulate it to avoid hyperrealistic Spam, fishing and rob…
rdc_jfahmhh
Comment
An interesting question is why people have been so shitty to each other. There are many reasons why acting selfishly was to an individual's advantage. Intelligence did not make us cruel to each other. There is no reason to fear intelligence, human or artificial; fear stupidity, human and artificial. The idea of making AI that is "safe" is a fiction; we cannot even make a hammer that is "safe".
We need to understand ourselves. There is much science in this area. Be aware: it is important to check whether an answer addresses the important question or is answering a simpler one. Look up the heuristics and biases research.
youtube
AI Governance
2025-12-04T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz9MCy7-LWsK6ywqNd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxUVl1Ne2CGo_UhLEF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugy6VJPQ8LAtdmOrcXt4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzc920NKu5EhIZhzJ54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxVZIrEnPauMp9KOdB4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwqAoXCctafZpo6GW54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzpWG10qIYRgXqFhdd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugx2nppn9o6X_5M9Y0t4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyuI4KxQ3wqDFjFWuF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwY94ItOEC4OZ2KAHJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
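The coding-result table shown above is derived from a response like this one: the raw array is parsed and each record is keyed by its comment ID so the four dimensions can be looked up per comment. A minimal sketch of that lookup step follows; the function and variable names are illustrative, not the tool's actual API, and only the field names visible in the JSON above are assumed.

```python
import json

# Two entries copied from the raw response above, used as sample input.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugz9MCy7-LWsK6ywqNd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx2nppn9o6X_5M9Y0t4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]
"""

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw JSON array of codings and key each record by comment ID."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        # Keep only the expected dimensions; default to "unclear" if one
        # is missing from a record (an assumption, not observed behavior).
        by_id[rec["id"]] = {d: rec.get(d, "unclear") for d in DIMENSIONS}
    return by_id

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_Ugx2nppn9o6X_5M9Y0t4AaABAg"]["emotion"])  # resignation
```

Looking up `ytc_Ugx2nppn9o6X_5M9Y0t4AaABAg` this way reproduces the row values in the table above (distributed / mixed / none / resignation).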