Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_Ugw93Ygqr…: "Krita the app application I have doesn’t have an automatic Timelapse feature. Un…"
- ytc_UgzZg3f3Z…: "TLDR: My issue is that ai images are unethically made atm more than anything els…"
- rdc_e7koo9j: "So they trained it for 10 years on a data source that was already biased... &am…"
- ytc_UgyIE_gsV…: "Okay that guy in court was the one actual acception to the "AI art is not art" '…"
- ytc_UgwGu5Weh…: "If the AI has self preservation or self determination, then it can and will be d…"
- ytc_UgwRCluTw…: "How real actually is the "giant money to be made"? If the narrative is taken to …"
- rdc_cc821o9: "They will be. China is pretty much there. You don't have to have all of your c…"
- ytc_UgxMAORb5…: "if you think the answer to these two q's are yes 1 Is super AI smarter then us …"
Comment
I feel that it'd be pretty simple, and necessary, to limit the purpose and functionality of a robot. As long as it behaves as intended, then there should be nothing to worry about, no matter how well it does its job. Even general purpose intelligence should be easy to maintain as long as all of its inputs and outputs are restricted and in check. Trying to give a robot a personality or an incentive to preserve itself seems unnecessary at the moment. Even if it had an incentive, replicating robots, memory and all, is far easier than replicating people.
Source: youtube
Title: AI Moral Status
Posted: 2017-03-06T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgiA1_INbJOFTXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugi1nrPKExbHOHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UggFGTUIov_oOHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UghdOolC8joZ6ngCoAEC","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UghAw59QZBitCngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgjT-wD9PuFMo3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UggRBlCDj7mB73gCoAEC","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgiLuFIX4HCn7HgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgjcZKTKJEoieXgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UggVd289Q9KLTngCoAEC","responsibility":"unclear","reasoning":"deontological","policy":"none","emotion":"indifference"}]
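
The raw response above is meant to be a JSON array keyed by comment ID, but model output like this can arrive malformed (as captured, it originally closed with a stray `)` instead of `]`). A minimal sketch of how such a response might be parsed defensively, with every dimension falling back to `unclear` when the ID is missing or a value is off-codebook. The codebook value sets below are assumptions inferred from the visible rows, not confirmed by the source:

```python
import json

# Two rows from the raw response above, embedded for a self-contained example.
raw = '''[{"id":"ytc_UgiA1_INbJOFTXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UghdOolC8joZ6ngCoAEC","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"}]'''

# Assumed codebook: allowed values per dimension, inferred from the sample rows.
CODEBOOK = {
    "responsibility": {"developer", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation",
                "indifference", "mixed", "unclear"},
}

def parse_codings(text):
    """Parse the model's JSON array; return [] if the output is malformed."""
    try:
        rows = json.loads(text)
    except json.JSONDecodeError:
        return []
    return rows if isinstance(rows, list) else []

def coding_for(rows, comment_id):
    """Look up one comment's coding. Any dimension that is absent, or whose
    value falls outside the codebook, defaults to 'unclear' -- which would
    produce an all-'unclear' table like the Coding Result shown above."""
    row = next((r for r in rows if r.get("id") == comment_id), None)
    if row is None:
        return {dim: "unclear" for dim in CODEBOOK}
    return {dim: (row.get(dim) if row.get(dim) in allowed else "unclear")
            for dim, allowed in CODEBOOK.items()}

rows = parse_codings(raw)
print(coding_for(rows, "ytc_UghdOolC8joZ6ngCoAEC")["policy"])  # none
print(coding_for(rows, "ytc_missing")["responsibility"])       # unclear
```

Defaulting to `unclear` rather than raising keeps a single bad or missing row from aborting a batch of codings; the trade-off is that parse failures and genuine `unclear` codes become indistinguishable unless logged separately.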