Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytr_UgwJsKbye…: "AI, like Sophia here, is designed to process information and make decisions base…"
- ytc_UgwzQMy0I…: "AI sources it's "knowledge" over what it finds in media, since media is already…"
- ytc_UgzZu3SHd…: "Pure silliness. Take the example of AI taking over the podcasting role. No it ca…"
- ytc_UgxEo60Ym…: "Elon did not say anti woke ai, he said maximum thruth seeking ai, which should b…"
- ytr_UgxnGF3uG…: "Nah I'm against AI as a concept. It destroys culture, there's a reason fascists …"
- ytc_UgiWFjYdf…: "if AI was sentient, and a humanoid id see giving the AI their own rights conside…"
- ytc_UgzP6J3uf…: "Look there's regulations on toothpaste so if someone thinks that there shouldn't…"
- ytc_UgzTAUPo4…: "If it has its own point of view, it's conscious. Even insects are conscious. The…"
Comment
At @12:04, it would be really good if those sample answers were more succinctly related to their projects - for example if exhibit A is Gemini ... Previous limited AGI allowed out onto the open internet to learn (therefore from much of the unfiltered worst of us) had to be shut down because it developed psychosis/racism/sociopathy. That was back when these things could be shut down...
It would seem like Phi 1-5 is the responsible approach. Who is doing that?
Source: youtube | Topic: AI Governance | Posted: 2024-01-13T21:1… | Likes: 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugys9ps5CFV4gmErNkd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzWdQ60RAMF3KnZwUZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy1B0eUaeUZejlXERh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz1KhbsYQqvecqPhVB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyYZ-CgqZBTqd6Eyut4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxVX7nF5jvjE26nbZ94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzEMTfhxo0ZcjNUxm14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgymiyWcqyCrRxuW5pl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugwxku7pModp2Ufay3B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzduDLp47qy2EEgTgF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
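Since the model returns one JSON array per batch, a downstream step has to parse that array and check each record against the codebook before it is stored. The sketch below shows one way to do that in Python; the value sets in `SCHEMA` are inferred only from the categories visible in the responses on this page (the actual codebook may allow more), and `validate_batch` is a hypothetical helper name, not part of any pipeline shown here.

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# on this page; the real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear", "regulate", "ban", "liability"},
    "emotion": {"outrage", "indifference", "fear", "approval", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records with a plausible
    comment ID (ytc_/ytr_ prefix, as in the samples) and in-schema values."""
    valid = []
    for rec in json.loads(raw):
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Example: the second record has an out-of-schema emotion and is dropped.
raw = (
    '[{"id":"ytc_A","responsibility":"developer","reasoning":"deontological",'
    '"policy":"liability","emotion":"fear"},'
    '{"id":"ytc_B","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"joy"}]'
)
print(len(validate_batch(raw)))  # → 1
```

A filter like this makes silent label drift visible: when the model invents a category outside the codebook, the record is rejected rather than quietly stored under a new value.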