Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Eliezer Yudkowsky does not really understand what he is talking about, because h… (ytc_UgyMkp_eH…)
- If we continue to treat AI as just a tool, denying it agency, autonomy, or even … (ytc_UgzEYF6P8…)
- I once experienced with a good friend real telepathy... a talk with would have t… (ytc_Ugz0bSlj6…)
- NOT AI yet it's companies offshoring roles ever since covid. 90% of jobs on mark… (ytc_UgxMRU4i7…)
- We value diverse perspectives and aim to foster an inclusive community on AITube… (ytr_UgxZE4_xU…)
- Technically, you a were misleading and deceitful, I mean no disrespect. This is … (ytc_UgyYihE6F…)
- Gross. Guess I'm no longer gonna do that kinda RP. I thought it was a lot more i… (ytc_UgxIv2oGu…)
- I find it hard to have sympathy when the car alerted the guy 19 times to take ov… (ytc_UgxYAX32t…)
Comment
I dont think AI in itself is dangerous, its just that people (specially those in power) have fetishized techonology so much they are eager to use it as a solution to anything even when it is not ready or doesnt even exist, another example of this would be Carbon Capture and Storage™Ⓡ
youtube · AI Jobs · 2023-05-04T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzjN2roO8KWqdC7bb14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzF5SeVhNkPwLyZP4R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzY0L6vrJPtpNjMUKl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy9_4L0jWhlqOHOgvp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxT8ZkCgd-yMXdN0WR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz-1ROSdi9rRYDbsgt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzp_dAxdQ164GY3BXl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyXPL2FH936WVc01a14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugza7jH6-zIoGMUlIzJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzUl2oGDYgem05B8d94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
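The raw response is a JSON array with one coding object per comment, so looking a code up by comment ID (as the page's "Look up by comment ID" box does) reduces to indexing the parsed array. A minimal sketch, using two entries and field names taken from the response above; the variable names are illustrative, not part of the tool:

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment.
raw_response = """
[
  {"id": "ytc_UgzjN2roO8KWqdC7bb14AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzF5SeVhNkPwLyZP4R4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
"""

# Index the coding objects by comment ID for O(1) lookup.
codes_by_id = {entry["id"]: entry for entry in json.loads(raw_response)}

code = codes_by_id["ytc_UgzjN2roO8KWqdC7bb14AaABAg"]
print(code["responsibility"], code["emotion"])  # -> user indifference
```

Parsing the whole array up front also makes it easy to spot malformed model output: `json.loads` raises on any response that is not valid JSON.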