Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
why isn't it a law to require AI tags in the meta data? How has this not been a…
ytc_Ugzd9lZ5N…
Who's going to tell them about Chump's executive order to ban AI regulations? AI…
ytc_UgwSgTFBW…
Guys. Drawing art does not harm anything, but ai does. They are waisting drinkin…
ytc_Ugxn6NADa…
I set up secure automations with Pneumatic Workflow, which prevents misuse and k…
ytc_UgwweR8ZH…
This is and will be the future... Real woman will have to compete against perfec…
ytc_UgyolMzru…
ChatGPT 2 could be the precusser to true AGI and then superintelligence, f so we…
ytc_UgwDaE7Ne…
There are artists (painters, musicians, poets, crafters) without hands, artists …
ytc_UgyYRGGkw…
"Look at this art I made with the assistance of AI" roughly translates to "Gaze …
ytc_Ugw6Q4tFY…
Comment
I think robots should be developed, viewing them as their own species which is still entirely dependent on us, and not as applications or simple programming, it is quite ad to think that we are keeping our own creations from improving based on how certain media persuaded us of a very likely worst case scenario, the movies are unprofessional assumptions of what may happen, taking example: Skynet, an AI that went and turned on us in a fraction of a second, this theory is heavily flawed because, who truly thinks an AI of this caliber wouldn't have been tested let alone be given immediate access to deadly weapons? In real life, we have ways of knowing how things like these would turn out, SKYNET would have been caught going evil with a simple simulation exercise.
We shouldn't be afraid to improve upon this for fear of the worst, if that had always stopped us, where would we be today? I'm assuming not as far as we are now, we should do as the saying goes: Hope for the best, prepare for the worst!
youtube
2014-06-05T12:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugh1jhhjoeswOngCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgikE7cscFoFZngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgibXj9_9Rmj1ngCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugh_UZIx6ky63XgCoAEC","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugj0GwHi6e-QIXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Uggd38O8HxeKVHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugiu_mae3HiDB3gCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UggAJkzm1ubmNXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Uggkt3haEyISYHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgjQSuOh0GT87ngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
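A raw response like the one above can be turned into a per-comment lookup with a small validation pass. This is a minimal sketch, not the tool's actual pipeline; the allowed value sets below are inferred only from the values visible in this dump and may be incomplete.

```python
import json

# Value sets per coding dimension, inferred from the raw response shown
# above (an assumption -- the real codebook may allow more values).
ALLOWED = {
    "responsibility": {"none", "user", "distributed", "ai_itself"},
    "reasoning": {"unclear", "mixed", "deontological", "virtue", "consequentialist"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"indifference", "approval", "mixed", "fear", "outrage"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects) into a
    {comment_id: coding} dict, rejecting unknown dimension values."""
    codings = {}
    for entry in json.loads(raw):
        comment_id = entry.pop("id")
        for dim, value in entry.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"{comment_id}: unexpected {dim}={value!r}")
        codings[comment_id] = entry
    return codings

# Usage with a single illustrative entry (hypothetical id):
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"approval"}]')
result = parse_codings(raw)
# result["ytc_example"]["emotion"] == "approval"
```

Keying by the comment ID mirrors the "Look up by comment ID" behavior shown at the top of this view; the validation step surfaces any off-codebook value the model emits instead of silently storing it.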