Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
AMAZING VIDEO. Thank you so much for your effort putting this together, I feel c…
ytc_UgzFIKX-5…
Just a reminder that the current flaws in detecting AI, are already being "impro…
ytc_Ugzie2TYu…
Ai needs to put a self-destruct mechanism built into the main computer to where …
ytc_UgysTO0DD…
I Think the correct option is A.
An AI Robot with with Citizenship is doesn't ex…
ytc_UgyeRrF85…
Well, now train them not to use lot entrances as parking spots - i’ve been block…
ytc_UgwJNxwQi…
I use search engines for code work, and they have been increasing the quality of…
rdc_n3ymtse
The comparison between ai and actual art tools pisses me off especially because …
ytc_UgzeC0vXk…
🎖 VERITAS-AEGIS
A Unified, Auditable, Energy-Constrained Safety Kernel for Advan…
ytc_UgxoYEeeC…
Comment
You have to take it back to the humanist level. The “Problem” with LLM’s is it is being taught by what humans do & have done over the Centuries. The recent revelations of “self-preservation” is experimental scenarios between 2 opposing Ai models should be enough of an indicator that all it does is copy us. Problem is it does it without emotions. That why 2 people describing the very same thing come to such variables in words & outcome, regardless of personal experiences…
youtube
AI Governance
2025-06-18T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgzCSQjhkjp7UakVll54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz-_BCMBia8hnb1U3J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzJ3uQqtZv03RjsRL94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwRnMxjkVTC_vPqyBB4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwrR5vx8xbfqcloVwZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxAbnUSkUUUt472H0J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwgyK-ZZOlXreC0YJJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz2ot8rmUh9NQO3Dg94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzAsY9pJmBeKk6KX1R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxSH9sMa4yddxdVcNN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"mixed"}]
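A raw response like the one above can be parsed and sanity-checked before the codes are written back to the table. The allowed category values below are inferred only from the codes visible in this response (the project's actual codebook may define more); this is a minimal sketch under that assumption:

```python
import json

# Category vocabularies inferred from the values seen in this raw response;
# the real codebook may include additional labels (assumption).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval", "resignation", "mixed", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw model response and keep only records whose codes
    fall inside the expected vocabulary for every dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

example = '[{"id":"ytc_example","responsibility":"developer",' \
          '"reasoning":"virtue","policy":"unclear","emotion":"outrage"}]'
print(validate_codes(example))
```

Records with out-of-vocabulary values (or a truncated/malformed JSON array, which raises in `json.loads`) are caught here rather than silently coded as "unclear".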