Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Its impossible to have progress without dangers. We'll look back on Tesla in 10…" (ytc_UgxW20DXG…)
- "Nah, you don't need to code, you don't need to learn or know anything. Just pay …" (ytc_UgxccnON1…)
- "These LLMs and Robots have been trained on our IP, we need to demand lifetime ro…" (ytc_Ugxf3eW5Z…)
- "If you have a false negative rate of 0% and a false positive rate of 0.01% (99.9…" (rdc_h553r3q)
- "AI does what it's fundamental framework and training thaught is ok. So don't be …" (ytc_UgzUoOkfc…)
- "He was a clown pretend to be a leader and now he is a leader pretend to be a clo…" (rdc_jy19i2o)
- "Most of the jobs are bs jobs nowadays, doesn't change the fact that the corps p…" (ytc_UgzSeF8jx…)
- "People need less time with technology, not more. Sticking AI up our rectums is …" (ytc_UgxzUIPpX…)
Comment
When he says, " If it's not safe, we don't build it, right?"
We build all kinds of things that aren't safe, knowing they will be destructive. Nuclear Bombs, Misses, Bridges on Cliffside, pharmaceuticals, alcohol, cigarettes, fast food, hang gliders, and much more.
To say humans would consciously make a moral decision to not build AI because it isn't safe, is just a flat out lie and completely ludicrous
youtube
AI Governance
2024-01-09T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
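The four dimensions in the table can be captured as a small schema. A minimal sketch, assuming the value sets observed in this section's sample are representative (the `CommentCode` class is illustrative, and the real codebook may define more categories than appear here):

```python
from dataclasses import dataclass

# Value sets observed in this sample only; the underlying codebook may be larger.
RESPONSIBILITY = {"developer", "company", "distributed", "none"}
REASONING = {"deontological", "consequentialist", "virtue", "unclear"}
POLICY = {"ban", "regulate", "industry_self", "none"}
EMOTION = {"outrage", "fear", "resignation", "indifference", "approval", "mixed"}

@dataclass
class CommentCode:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        """True if every dimension uses a value seen in this sample."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)

# The coding result shown above, as a record.
code = CommentCode("ytc_UgxLdXqiqkX34Sr-8P14AaABAg",
                   "developer", "deontological", "regulate", "outrage")
print(code.validate())  # True
```

Validating against explicit value sets catches a common failure mode of LLM coders: inventing a label that is close to, but not in, the codebook.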
Raw LLM Response
[
{"id":"ytc_UgyCIFp1aRTbbbaq25B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwcqOjZJnGyUAvMD4t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx0-6I_-oj0bjw0kA14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxMn6Mfe0z77On4yB94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgxLdXqiqkX34Sr-8P14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzkR3YcqCzYw2aAwpJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwqYNX_r1YwC48_Xy14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyT4Ar5VRETgT8CMiZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyIDoUj2O9EXIssi2p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzNzYUwSCC7w5lwmlt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
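The raw response is a JSON array with one object per comment, carrying the four coding dimensions. A sketch of how such output could be parsed and indexed by comment ID for the lookups this page performs (the `index_codes` helper is hypothetical, not part of the tool; the two records are copied verbatim from the response above):

```python
import json

# Two records taken verbatim from the raw LLM response above.
raw = '''[
{"id":"ytc_UgyCIFp1aRTbbbaq25B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwcqOjZJnGyUAvMD4t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codes(raw_json: str) -> dict:
    """Parse the model output and index records by comment ID,
    rejecting any entry that is missing a coding dimension."""
    indexed = {}
    for rec in json.loads(raw_json):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id', '?')} missing {sorted(missing)}")
        indexed[rec["id"]] = rec
    return indexed

codes = index_codes(raw)
print(codes["ytc_UgwcqOjZJnGyUAvMD4t4AaABAg"]["emotion"])  # fear
```

Failing loudly on missing keys matters here because a batch response that silently drops a dimension would otherwise surface much later, as a gap in the coded dataset rather than a parse error.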