Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- this is what i think they should do for ai safety for agi and asi they should ma… (ytc_UgwxlJ7_2…)
- Ai is not going to take over the world, if it rains on them they will get destro… (ytc_UgyLoJl8P…)
- QUESTION; CAN AI ROBOTS STOP OR WITHSTAND A .50 CAL TO THE DOME FROM 2200 METERS… (ytc_Ugwi5XcRA…)
- That's awesome! It's great to meet another Sophia. Just like the robot in our vi… (ytr_UgxW_QkES…)
- Can't they just use a whole bunch of art works already there in their storage? W… (ytc_UgxRSDr_G…)
- Humans are cooked. Per Eric Schmidt, "AI has to have human values" while the US … (ytc_Ugzee1i5b…)
- Man how did you expect it to go it's a robot omg th at robot ve Mike Tyson… (ytc_UgzK5URxY…)
- What happens when you don't have an alibi, and can't prove that the facial recog… (ytc_UgxjO23tC…)
Comment
Imo the problem isn't algorithms interacting with input data in ways the coders don't even understand - that seems to me like a great scientific opportunity to research intelligence (In a closed environment).
The problem is giving anything like that access to sensitive systems with reckless abandon. You don't want your nuclear power plant to be connected to an AI that grew on data from 4chan.
Especially since these things are apparently being made with an advisory role in mind to begin with, i mean otherwise ethical concerns would have played a much bigger role in creating the "scaffolding" being mentioned here.
youtube · AI Governance · 2025-08-29T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwfV_WbxdQNpgHFDzh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy3n4mmo9K4_1RRnrt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy1BRIGKZG_IpvBJ014AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxU2iUBO6bXmuYKkBF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgzUlUfWtbNN9qPVO2d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
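A batch response like the one above only becomes usable once it is parsed and checked against the codebook. Below is a minimal sketch of that validation step, assuming the allowed values per dimension are the ones visible in the examples on this page (the real codebook almost certainly defines more categories), and assuming comment IDs carry the `ytc_`/`ytr_` prefixes seen in the samples. `validate_coding` and `ALLOWED` are hypothetical names, not part of the actual pipeline.

```python
import json

# Hypothetical codebook, inferred only from the sample output shown above;
# the real coding scheme likely includes additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself"},
    "reasoning": {"consequentialist", "virtue"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject malformed records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        # IDs in the samples use ytc_ (comment) / ytr_ (reply) prefixes.
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"bad comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: invalid {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_Ugy1BRIGKZG_IpvBJ014AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = validate_coding(raw)
print(coded[0]["policy"])  # regulate
```

Validating every record before it reaches the database is what makes a per-comment lookup like the one above reliable: a response that drifts from the codebook fails loudly at ingest instead of surfacing later as an unlabelable dimension value.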