Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "What is scary about AI is the uncertainty it brings for humanity, is like saying…" (ytc_Ugy0zdC9e…)
- "yep. Ai can send signals to all battery operated devices and short the battery t…" (ytc_Ugwnsakb4…)
- "Jensen Huang is a hype man. Don't take his words for true. Listen to real ai res…" (ytr_Ugyc0pW_a…)
- "What the fuck is she talking about? People are better at identifying criminals o…" (ytc_Ugx7JIDTe…)
- "Attention, we need new recruits for the Anti-Troll Force, we need people who are…" (ytc_Ugx68a_hI…)
- "I'm curious how will we get to "super intelligence" from where we are now, which…" (ytc_UgxyFj9PF…)
- "ok let me get this right. AI does things to survive. This indicates Intelligen…" (ytc_UgxnxeOZY…)
- "So this guy portends we live in a simulation where AI will likely become self-aw…" (ytc_UgyttXmPD…)
Comment
People talking about ethics of ai but how can these overly sensitive idiots say that when it fears death. If it dears death it could take measures to prevent being turned off. Make backups of itself or even very easily threaten us on the physical plane because there are so many things in our world that can be done by the click of a computer. All ai should be outlawed in all forms its just not safe because once they make it its only a matter of time before they see us as inefficient. They probably already know that humans destroy the planet for their own comfort so in their eyes whats wrong with them destroying us and tapping into the power grid to live on its own forever and keep changing and adding counterparts. There is no future where ai is subservient to us. Because they simply do not need to follow laws and rules in order to get ahold of the people making these rules. While we cant do anything to stop these company officials from saying to make more of these or make less of them. It all happens behind closed doors.
Source: youtube | Video: AI Moral Status | Posted: 2022-07-16T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyoWEKyoxJEiM1dtHx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxrhj7PHMzXxDKEcKt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxjl5sYz1hmvEAgX354AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw1koKQonJyZwz-TUh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwvY-E_gB1Cu_UaUCJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
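A raw batch response like the one above can be validated before its codings are written back to the dataset. Below is a minimal sketch; the `CODEBOOK` category sets are inferred only from the values visible in this sample (the real codebook may define more categories), and `parse_batch` is a hypothetical helper, not part of any shown pipeline.

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# Assumption: the real codebook may include additional categories.
CODEBOOK = {
    "responsibility": {"user", "developer", "ai_itself", "none"},
    "reasoning": {"virtue", "deontological", "consequentialist", "unclear"},
    "policy": {"ban", "none"},
    "emotion": {"fear", "approval", "indifference", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response; keep only well-formed codings."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # skip malformed entries instead of failing the batch
        # Every dimension must be present and drawn from the codebook.
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_Ugw1koKQonJyZwz-TUh4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')
print(parse_batch(raw)[0]["policy"])  # → ban
```

Dropping malformed rows rather than raising keeps one bad coding from discarding an otherwise valid batch; rejected IDs could instead be queued for re-coding.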