Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or pick one of the random samples below.
Random samples:
- ytc_UgxT6Z229…: "Boy that's a lot of fear mongering. There's always gonna be asymmetry of knowled…"
- ytc_UgyFS8hgd…: "Bro I’ve been grinding art my WHOLE life and I suck at it, but ai is excepting d…"
- ytc_UgyGUFWcV…: "I don't see the problem. As long as they openly admit it's AI, what's the proble…"
- ytc_UgyprAqQQ…: "AI art is heir to the trash art that predominates in the world, mainly by "artis…"
- ytc_UgzDtO1am…: "😂 I'm enjoying every moment. USA. Here's the medical term CTD Nurses and docto…"
- ytc_Ugx7jiSpt…: "So we have to be professional writers, storytellers, mentors, advisors to our cl…"
- ytc_Ugz0EPINv…: "Blaming ai for killing people is almost the same as blaming machinery for killin…"
- ytc_UgzKGY-Gy…: "If me, a human being, start learning from your art work and others, and start dr…"
Comment
Chill out. O3 and anthropic trained the models on real data. The engineers fed those data and rules, the model predicts the most likely result and gives it. Nothing serious to be concerned about. AI hasn't reached a level where a conscious can be created or born out of nowhere. If Models somehow reached a level where it can clone a conscious and train itself then that's a problem if the data or conscious trained on has those traits that humans fear. Our hardware hasn't reached that level to clone an entire human level conscious, it's just the behaviors and most likely results expected are given by the current AI system. Stop creating fear among the non technical people. It's laughable when non tech people talk and mix emotions into things that doesn't even exist in reality yet.
Source: youtube · AI Governance · 2025-05-27T19:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
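
Each coded comment carries one label per dimension, drawn from a small closed set. As a rough sketch of what one coded record looks like (the label sets below are inferred only from the values visible in the sample response on this page, not from the full codebook):

```python
from typing import Literal, TypedDict

# Label sets are an assumption, inferred from the sample response shown below;
# the real codebook may include additional values.
Responsibility = Literal["developer", "company", "user", "ai_itself", "none", "unclear"]
Reasoning = Literal["consequentialist", "deontological", "virtue", "unclear"]
Policy = Literal["none", "ban", "regulate", "liability", "unclear"]
Emotion = Literal["approval", "outrage", "fear", "mixed", "indifference"]

class CodedComment(TypedDict):
    id: str                       # comment ID, e.g. "ytc_Ugx20nnpjU2Zs9PX5qN4AaABAg"
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```

Typing the labels this way makes it easy to flag off-vocabulary values when validating a batch of model output.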
Raw LLM Response
[{"id":"ytc_Ugx20nnpjU2Zs9PX5qN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwKwpgPrjdrcJ8WAWV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw9ekfHukddGgixaEF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyjk0J6vB2quJd5qwR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx6sTg4ccT9MeCu7st4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzlzU9qcBy9zydDMPl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyzlyYQbxnSZJNy7CV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxPdVE-CizBQH-QIEh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyDNJluIPxu6ILI6d14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwy9_BbfqB3bb-fwdl4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]