Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgxfLH5lz…`: "why is everyone choosing the number 1 number 1 is real the 2 is ai besause the …"
- `ytr_UgxFZH36U…`: "We appreciate your engagement with AI topics! While the concept of AI advancing …"
- `ytr_Ugwa9Nc92…`: "I like that you shed light on the malicious practices ppl use ai for but when yo…"
- `ytc_UgwOQGgdK…`: "id say he would have a point, if he had taken a picture, or drawn a sketch or so…"
- `ytc_UgzWY9MjT…`: "The need to clearify BBC - news! (not that other BBC stuff) at 21:52 tells me wh…"
- `ytc_Ugy59CCik…`: "AI killing off the human race in 2 years? We don’t think so there are people wh…"
- `ytc_UgykpDuQl…`: "What a bunch of clueless sycophants in the comments 😂 This conversation is just …"
- `ytc_UgxwGHUwS…`: "I work with a client who uses ChatGPT like this guy did. They feel like they’re …"
Comment
So very talented rhetorician/logician convinced himself of a boogeyman and how uses his talents to promote the idea. Kinda like a genius level chicken little. Did he read Acclerando or The Quantum Thief and get scared? Listen to Wolfram and don't clutch your pearls. AI alarmism must be very useful in the eyes of AI corporation publicists.
Now, if Eliezer is deeply concerned with the existence and status of sapient, caring life, that makes sense. We should be fighting against the way corporations shape human existence on the same moral and ethical basis. I'm unconvinced humanity will be terrorized by datacenters. And if It was, it would be fairly simple to deny them power and network.
youtube · AI Governance · 2024-12-11T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
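A coded record like the one above can be sanity-checked against the category vocabularies before display. The allowed values below are inferred only from the responses visible on this page (the full codebook may define more); `invalid_fields` is a hypothetical helper, not part of the tool itself.

```python
# Category vocabularies inferred from the coded records shown on this page.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "indifference"},
}

def invalid_fields(record):
    """Return the dimension names whose value falls outside the known vocabulary."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The record coded above (developer / deontological / industry_self / outrage).
record = {"responsibility": "developer", "reasoning": "deontological",
          "policy": "industry_self", "emotion": "outrage"}
print(invalid_fields(record))  # → []
```

A record with a missing or unknown value would come back with the offending dimension names, which is enough to flag it for manual review.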
Raw LLM Response
```json
[
{"id":"ytc_Ugzyw7P6UIG7qr9orm94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz5qfO2p5ouopqxF9J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw5jx3JN_iJjVdgF-V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxgabcdIuRhNkDAGoZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzK0cxdklJv4XjEKQV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugwk38JoiF5nupttEiV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgxUpWrqOtfeJUqbHoB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw9Yn37_qtH16HPxL54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzRiCvRXTjY9wSaOpB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxrxwC9GQeGPZSOxHV4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
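The "look up by comment ID" feature amounts to parsing a raw response like the one above and indexing its records by `id`. A minimal sketch, assuming the key set shown in the response (the `index_by_id` helper and the two-record sample are illustrative, not the tool's actual code):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''[
{"id":"ytc_Ugzyw7P6UIG7qr9orm94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwk38JoiF5nupttEiV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"}
]'''

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(response_text):
    """Parse a raw LLM response and index its records by comment ID.

    Malformed records are skipped rather than failing the whole batch,
    since the model output is not guaranteed to be well-formed.
    """
    coded = {}
    for rec in json.loads(response_text):
        if isinstance(rec, dict) and REQUIRED_KEYS <= rec.keys():
            coded[rec["id"]] = rec
    return coded

coded = index_by_id(raw)
print(coded["ytc_Ugwk38JoiF5nupttEiV4AaABAg"]["emotion"])  # → outrage
```

With the full ID (previews in the sample list above are truncated), any coded comment then resolves to its dimension values in one dictionary lookup.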