Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or browse the random samples below.
Random samples
"Sora 2 is a trainingground for AI to understand the world around it. Creating a …" — ytc_Ugx2mNarW…
"We need a new Geneva Convention banning anything that can strike a target withou…" — ytc_Ugw-IiNC1…
"When people clamor for AI to work 100% of the time, they are unwittingly asking …" — ytc_UgyIIi7dN…
"This some " I Robot" sh*t like that movie with Will Smith. If they making robots…" — ytc_UgxbVAbol…
"do not let it has actual feelings. not mentioning AI, even if my phone has feeli…" — ytc_UgwDN7IDg…
"The psychopath a head of open AI doesn’t care about anything you to have talked …" — ytc_UgzxCtwIz…
"Ai “artists” are not creators. They are just a customer. That’s like saying my m…" — ytc_UgyKragGP…
"Bunch of horseradish, we still need humans to observe and evaluate the AI. You c…" — ytc_UgzKbYXZH…
Comment
..what for is any controlling agency, while no agency can keep up with the exponential evolution of such exponential superpower of unknown?
Once it is here there is no one who can stop it, the application of AI it self broke several laws already and just because of its scale nothing stops it to break another rules on the go.
As long as it will be here and will evolve there will be somebody who willingly or not will cause enormous problem thanks to this superpower which will blast like nuclear explosion and there will simply be nobody to counter act fast enough.
"Ask people in Hiroshima" how they could counter act to what happened..... something like that will most likely happen and it might be a guy in a shed who will accidentally make a "black hole" in the system, or a company who will make decisions based on the model predictions... it will just explode and there will be no chance to stop it.
Source: youtube | Topic: AI Governance | Posted: 2023-06-07T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response (the model coded this comment as one record in a batch of ten)
[{"id":"ytc_UgyHr_O5jJN-U2cmbpB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyNou8Bwlfsuoxkl6p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyRdyROTTe7XtOvO-V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzxskjiaRJFniuzh3J4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz91Nn1qvwh2eHza0R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzNlXe5VBNwe0wRn7Z4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwW5IWnYUdPynz3haF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzzehhvFtDb3q-9S4l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyu7txrlqS7VbO-tbd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxbbpOLixJoy1cz_dl4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"none","emotion":"outrage"}]
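The raw response above is a JSON array of per-comment codings keyed by comment ID. A minimal sketch of how such a batch might be parsed, validated, and indexed for the ID lookup this page offers. The dimension vocabularies below are inferred only from the values visible on this page; the real codebook may define more categories.

```python
import json

# Raw batch response as returned by the model (truncated to two records here).
RAW_RESPONSE = '''[
 {"id": "ytc_Ugz91Nn1qvwh2eHza0R4AaABAg", "responsibility": "ai_itself",
  "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
 {"id": "ytc_UgzxskjiaRJFniuzh3J4AaABAg", "responsibility": "government",
  "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"}
]'''

# Assumed vocabularies, reconstructed from the values seen in this dashboard.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself", "government", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"indifference", "mixed", "outrage", "resignation",
                "approval", "fear"},
}

def index_codings(raw: str) -> dict:
    """Parse a batch response into a dict keyed by comment ID,
    rejecting any value outside the (assumed) codebook vocabulary."""
    by_id = {}
    for record in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if record.get(dim) not in allowed:
                raise ValueError(f"{record['id']}: bad {dim}={record.get(dim)!r}")
        by_id[record["id"]] = record
    return by_id

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_Ugz91Nn1qvwh2eHza0R4AaABAg"]["emotion"])  # resignation
```

Validating against a fixed vocabulary catches the most common failure mode of LLM coders: an off-codebook label that would otherwise silently pollute the tallies.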