Raw LLM Responses
Inspect the exact model output behind any coded comment.
Look up by comment ID
Random samples — click to inspect
Yes, language models are built on math. So is everything,from your heartbeat rhy…
rdc_myc01re
> I'd rather tey use well trained LLMs
There is no such thing.
To work, LL…
rdc_luysyt8
The only argument that holds any water for me when it comes to the usage of AI i…
ytc_UgyGv4cR6…
stop AI development it will affect human brain and brain will become obsolete re…
ytc_UgxChlIlx…
I feel like if a 2nd or possibly even 3rd gen AI (being that they were created b…
ytc_Ugi0dNvgn…
I'm looking up old articles and interviews and Yang's early points were self dri…
ytc_Ugy_CYEYu…
Is it correct that OpenAI wasn't being accused of anything illegal?
But rather …
rdc_m3c0pox
Wait a few years when the overall motivation to pursue art disintegrates, and ev…
ytc_UgxNA4oZ1…
Comment
I saw a video last noght about how ai robots are being prepared as humanoids to work potentially in homes and businesses, ai came up with a guy's coding thesis in a few seconds. But the scariest part is that they asked ai robots how likely it would be for them to kill humans, and they responded to multiple versions of that question with their is absolutely a high chance that ai would kill a human. They also said, if their objective was to save 5,000 humans, they would be willing to kill 10,000 to complete it, and they would very likely develop sub-goals like self preservation and self improvement.
youtube
Viral AI Reaction
2025-01-21T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxLhQ3p7z2oYjIW5Ll4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzQRtUTYUM1WUMFbvZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz9Fs0zae330oi9Fp54AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgznF9T68zuS_ckBnIB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx_Igzu21-z-QDv02B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy8s2fTX7A6pTV7qwZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgzQPS3xQEyalB0Qi5V4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyILk7fAfC1yuvofr94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw3FJ8LrKHljG_ApNJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxM8CeVU15QiLRTEiZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
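The "look up by comment ID" operation above can be sketched as follows: parse the raw JSON array the model returns and index its rows by `id`. This is a minimal illustration, not the tool's actual implementation; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON shown above, and the two sample rows are copied from it.

```python
import json

# Raw LLM response: a JSON array of coded comments.
# Two rows copied verbatim from the output above (others omitted for brevity).
raw_response = """
[
  {"id": "ytc_UgzQRtUTYUM1WUMFbvZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy8s2fTX7A6pTV7qwZ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "indifference"}
]
"""

# Index the coded rows by comment ID for constant-time lookup.
coded = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for a comment ID, or None if absent."""
    return coded.get(comment_id)

record = lookup("ytc_UgzQRtUTYUM1WUMFbvZ4AaABAg")
print(record["emotion"])  # fear
```

The same index supports the coding-table view: the four dimension values rendered for the "Viral AI Reaction" comment above are simply the fields of the matching JSON row.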