Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- @ThatWeirdGuy43 I do understand that and honestly i agree on this. It is just te… (ytr_UgyuP7lCx…)
- For me I don't see any difference it's just a robot it doesn't matter too in my … (ytc_Ugw2M-xX3…)
- 0:07 Do you have to artificially teach AI not to hate Jews? Is that your fear? 🤔… (ytc_UgyE8aO_8…)
- +flamegod7 thats the type of attitude that will lead to the robot uprising again… (ytr_Uggs2YgYg…)
- Oh man. Ya’ll do realize AI is watching this, right? *I don’t know* though… mayb… (ytc_UgxmLpVDO…)
- 5:14 if you get a chance listen to the interviews with the parents and the lawye… (ytc_UgzqjyJAn…)
- I don’t mind machines doing menial tasks for humans. The down fall of humanity c… (ytc_UgxgBUrTp…)
- To say that AI has consciousness is like saying a tree has intelligence. We have… (ytc_UgzbJsoR5…)
Comment
I don't think we have a choice with regards to progress. We are neurologically hardwired to grow used to our content situation, no matter for comfortable and luxurious. So in order to get a possible feeling (happiness) from our life, we have to IMPROVE our situation. To say in other words: Our happiness is proportional to CHANGE in our lot in life and not the absolute value. And we are litterately addicted to happiness via neurochemicals (endorphines, dopamine etc.). This is why we seek economic growth blindly, it's an addiction.
So if AI can become a viable way to improve our lives, we will do it. We can't resist. So this philosophical debate is interesting in theory, but it is irrelavent in practice. The more interesting issue is whether or not we can control the AI we create. See this video (and the ones leading up to it):
"Deadly Truth of General AI? - Computerphile"
Source: youtube · Posted: 2017-01-24T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxQw7YfMcOhg6zyCbR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyZKKVQOOweXnuzyGR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgxMvnr5ixkjJGjQTgp4AaABAg","responsibility":"elites","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy1DPqIDMxmnKwbdc94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzQi1gAtvrINJhUugx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzA4QfdzSS2WxK1u6l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz3r5ONYiHca-oWIdd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UghmHBsOLD4fY3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"approval"},
{"id":"ytc_UghA24C7Vxvn43gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgjnnBXlqmRuLHgCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
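The raw response above is a JSON array in which each record carries a comment `id` plus the four coded dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a payload could be parsed and sanity-checked before use, assuming the allowed labels are the ones observed in this sample (the real codebook may permit more values; `parse_raw_response` and `OBSERVED_VALUES` are illustrative names, not part of the tool):

```python
import json

# Label sets observed in the sample response above; the actual codebook
# (not shown in this page) may allow additional values.
OBSERVED_VALUES = {
    "responsibility": {"none", "ai_itself", "elites", "developer"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "industry_self", "liability", "regulate", "ban"},
    "emotion": {"mixed", "fear", "indifference", "approval"},
}

def parse_raw_response(raw):
    """Parse one raw LLM response into {comment_id: coded dimensions},
    warning on any label outside the observed value sets."""
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        dims = {k: v for k, v in rec.items() if k != "id"}
        for dim, value in dims.items():
            if value not in OBSERVED_VALUES.get(dim, set()):
                print(f"warning: {comment_id}: unexpected {dim}={value!r}")
        coded[comment_id] = dims
    return coded

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = parse_raw_response(raw)
print(coded["ytc_example"]["policy"])  # regulate
```

Keying the result by comment ID mirrors the "look up by comment ID" workflow of this page: a lookup table lets the exact model output for any coded comment be retrieved directly.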