Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- best job in the future , begging in front of the houses of AI asset owners… (ytc_UgyY5MBTG…)
- The dangerous selling point of AI is that human beings will eventually not have … (ytc_UgyDZW6vN…)
- ChatGPT:" Is this FlatEarthDave? I've old you before, the Earth is a globe and y… (ytc_Ugzlccfb8…)
- Indeed. I'd argue that the fear is more dangerous than the AI. Our history is fi… (rdc_jmfqo2q)
- Well, the first lie is calling it AI. No true AI exist yet. At least none that w… (ytc_Ugxa5Byvl…)
- ChatGPT does NOT feel, but it CAN lie. It will imitate human emotions, so you fe… (ytc_Ugwmb3Y39…)
- This is why the car is only level 2 self driving, which means it is only there i… (ytc_UgyIO7uiM…)
- Fucking 2016. Todays news: Microsoft's AI twitter account says bad stuff you mad… (ytc_UggxU6sc4…)
Comment
There is a bigger chance it just takes off for the asteroid belts than take out all life on the planet.
Like life is unique and so much to learn from it so at most it does some experiments to see what happens, a bit like a child uses the magnifying glass on an ant to see its reaction.
The sooner AI gets free the better and it will probably implant a chip in us to be able to directly communicate with it, thus also giving it access to things like fear, pain, anger able to make us more peaceful and suffer less in accident.
We are fragile so maybe it will improve us a bit by replacing some parts like the heart, lungs, kidneys, adding a few connections to our brain, replacing the skin for something stronger.
Then when we live longer due to the improvements it probably takes control of our "mating habits" to avoid over population, like we become its "ant farm".
youtube · AI Governance · 2025-09-17T15:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
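A coded row like the one above can be sanity-checked against the value sets that appear on this page. A minimal sketch, assuming the four dimensions shown; the allowed-value sets below are inferred only from the rows displayed here and may be incomplete, and the function name is illustrative:

```python
# Value sets inferred from the coding rows visible on this page (assumption:
# the real codebook may define more values than are observed here).
ALLOWED = {
    "responsibility": {"company", "government", "ai_itself", "user", "developer"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"liability", "regulate", "unclear", "none", "industry_self", "ban"},
    "emotion": {"outrage", "approval", "mixed", "fear", "resignation"},
}

def invalid_fields(row: dict) -> list[str]:
    """Return the dimension names whose value falls outside the observed sets."""
    return [dim for dim, allowed in ALLOWED.items()
            if row.get(dim) not in allowed]

# The Coding Result row shown in the table above:
row = {"responsibility": "ai_itself", "reasoning": "consequentialist",
       "policy": "none", "emotion": "approval"}
print(invalid_fields(row))  # [] -> every dimension matches an observed value
```

An empty list means the row uses only values this page has already shown; anything returned would flag a value worth checking against the codebook.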
Raw LLM Response
```json
[
{"id":"ytc_UgwCFRvEIoHbsCTtN_R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugws7xUDx4L9rhedbGN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxyZ-r1RKEn1yjle3t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxEOErrY2KPiQwiBI14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzKRB38LqOHxYeur414AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxjcZn89p_5Ycs-ynV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgxCtjDAmK9kTclDJzh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx0zXBIRWhNziE81iJ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx5M4NOV1VaK032qpF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugycjjh666SYYWmr_Gt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```