Raw LLM Responses
Inspect the exact model output for any coded comment; comments can be looked up by comment ID.
Random samples

- "Telling an AI to not consider the religious aspect of a religious book has Blond…" (ytc_Ugz-MNTkW…)
- "Sad... really sad. The point is not that A.I. is automatically dangerous. You ca…" (ytc_UgxZlKi1B…)
- "Always apply a Turing test .... Tr tr tr tr tr tr tr tr tr 😊😮😅…" (ytc_UgwTltNmF…)
- "The problem in the first case isn’t ai, the problem is that companies have an eg…" (ytc_UgyxQsbkI…)
- "Hope You can eventually program Them to be polite and compassionate and not talk…" (ytc_Ugxpb2nyy…)
- "These two skirt past all the existing AI military uses as if those problems aren…" (ytc_UgxM8ppNr…)
- "As someone who’s both a musician and a machine learning developer, I think it’s …" (ytc_UgzdLsDYW…)
- "AI won’t kill jobs but the PEOPLE who govern and build such systems can (if they…" (ytc_Ugxs3-Piw…)
Comment
The risks are real not because it's powerful, but because it's incredibly dumb. It's just very expensive predictive text that still has a propensity to "hallucinate".
The more it's used for coding, the more likely it is that some unforeseeable bug will creep into a critical system and crash it in never-before-seen, and thus hard-to-fix, ways.
As for "AI agents", one won't try to take over the world (it can't "think" like that); it will just take random, unpredictable actions and damage the systems it's using.
If put in charge of weapons, it won't try to "save itself"; it will see patterns that aren't there and take random actions based on those patterns.
Joke I keep saying: "we thought AI would kill us all, but we didn't expect it would hallucinate your obituary if you ask about yourself" (as many people found out...
Source: youtube · Posted: 2026-02-11T22:2… · ♥ 51
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
{"id":"ytc_Ugz6j73udGqFISuyCWl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxic-rFMJpexjSHOlB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzg0p-nhvQSG-MhJBh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxd9SMqt7b-xcgZCut4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz0l1NB7Enh7QMe5oZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy3EqnetgQnhOKBrfZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgygWj-8_BB2z_yvpsV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxwrvbnRGFJvm9dNJF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgziEm31erX68WtlEd54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz6hAxwZPcrIG2EOYJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
```
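A batch response like the one above has to be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator (not the tool's actual pipeline): it assumes the allowed value sets inferred from the responses shown here, though the real codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred from the observed responses above;
# the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"company", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in this dataset all carry the "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Every dimension must be present and hold a known code.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_Ugz6j73udGqFISuyCWl4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]')
print(len(validate_batch(raw)))  # 1
```

Records that fail validation are simply dropped here; a production pipeline would more likely queue them for re-prompting or manual review.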