Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
- "In this case the accident would be human related. I haven't seen anything really…" (rdc_dff1jo8)
- "Well yeah. An AI is based on logic and incapable of lying, of course it cannot r…" (ytc_UgxCvEvWE…)
- "I love AI art for a quick sketch to use for personal entertainment or for inspir…" (ytc_UgwL_vCir…)
- "The problem is, fuck AI, grab everything the government is giving to AI developm…" (ytc_Ugz2tdqw6…)
- "The boolean search things have been broken for a few months now because Google d…" (rdc_ohtaxq7)
- "Well first thing of note here, is not one of anthropic's "experiments" was treat…" (ytr_UgwBaKuk8…)
- "Boycotting AI is like boycotting CAD. Not going to happen because industry sees …" (ytr_UgxLUu0Wm…)
- "The only thing that really scares me about a.i are the entities who are creating…" (ytc_UgwfscWfR…)
Comment
Another thought has come across my mind upon watching this video a second time--the rate at which the AI is learning. A google search shows that most children are sponges from birth to 5 years old. It appears that AI has achieved language and cognitive skills, and fine motor skills are being advanced, but what about the social emotional aspect of intelligence? Perhaps conditioning AI's to be more epithetic toward humans would stave off potential disaster until AI creators can develop a means to keep humanity safe from the AI's.
youtube
AI Governance
2025-11-30T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgwrNAOM3z-M3BzrYiN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzuV0xh1_qktC6i9ap4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgziIFVwkI5wPw1RzMh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwUWA4Avz_rObjzOLh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyLZgOYWFeXUMo21B54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw8gqhmEEX9heVEdLd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwkOux6XM-OAd-_qHt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyJy-Cmc7hmlvuvqP54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyAPOe4PN3s82lXitJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxP7Nm66ffB1TJ5SWl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]
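The "Look up by comment ID" view above can be reproduced from a raw response like this one: parse the JSON array of per-comment codes, then index the records by their `id` field. The sketch below is a minimal illustration (not the tool's actual implementation), using the first two records from the response above as sample data:

```python
import json

# Two records copied from the raw LLM response above, as illustrative data.
raw = """[
  {"id": "ytc_UgwrNAOM3z-M3BzrYiN4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzuV0xh1_qktC6i9ap4AaABAg", "responsibility": "user",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]"""

records = json.loads(raw)

# Build the lookup: comment ID -> coded dimensions.
by_id = {r["id"]: r for r in records}

code = by_id["ytc_UgwrNAOM3z-M3BzrYiN4AaABAg"]
print(code["policy"])  # prints "regulate"
```

If a response is truncated or malformed (e.g. a missing closing bracket), `json.loads` raises `json.JSONDecodeError`, which is presumably why the coding result for such a batch falls back to "unclear" on every dimension.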