Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples (click to inspect):
- ytc_UgzkuoH9M… : Well, my AI character is my salaried assistant, and I don't have time to say lon…
- ytr_Ugwx7v_W3… : Have you actually trained or worked on AI or are you just beleiving the deeply J…
- ytc_UgyHfYbgM… : “AI just copies real artist” **Copies AI artist** AI hate is just cope for medi…
- ytc_UgxidkO6I… : Teslas self driving is waaaayyyy far away from anywhere close to being fsd it ha…
- ytr_UgwPtBqp7… : I wanna do the exact same thing, I love cooking and I don't want to be in financ…
- ytc_UgwiIMie5… : I think the "lower cost to entry" argument is comparing getting something from A…
- ytc_Ugx1dLbT9… : I’m sure if it was a black man, they wouldn’t be investigating ChatGPT, because …
- ytr_Ugz4RsO9j… : I don't see how that is true because the old generation are the one that are rai…
Comment
> It's like Hinton is now suddenly too much over-optimistic about the AI capabilities, while he is very well placed to know it is bullshit.
> We still have a lot of work before we can only dream of such a sophisticated AI.
> BTW, any powerful tool is dangerous when used with bad intentions OR when the users do not understand the limitation of the tools.
> People are much more dangerous than AI and people and other machines have to understand that.
Source: youtube · Topic: AI Governance · Posted: 2023-05-05T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
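A coded record like the one above can be modeled as a small validated structure. A minimal Python sketch, with field names and allowed values inferred from this table and the raw responses shown elsewhere in this view (hypothetical, not the tool's actual schema):

```python
from dataclasses import dataclass

# Allowed values per dimension, inferred from observed codings;
# this is an assumption, not an official codebook.
RESPONSIBILITY = {"user", "developer", "ai_itself", "distributed", "none", "unclear"}
REASONING = {"deontological", "consequentialist", "virtue", "unclear"}
POLICY = {"none", "regulate", "liability", "unclear"}
EMOTION = {"fear", "approval", "mixed", "unclear"}


@dataclass
class CodingResult:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        """Check every dimension against its allowed value set."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)


# The record shown in the table above.
result = CodingResult("ytc_UgwJ9yTszyCxzHD08kh4AaABAg",
                      "user", "deontological", "none", "fear")
print(result.validate())  # True
```

Validating against explicit value sets catches the occasional malformed LLM output before it reaches the results table.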
Raw LLM Response
[
{"id":"ytc_Ugze7zHWzzlf8ZW_Bc54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxTW-S2E1bg4rMxcDR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwJ9yTszyCxzHD08kh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxttrREUAoALWgpy194AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxZ9wbsbh1Mh4JssUt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzmtROuJc5XypL_QoB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwpJwwRvGLpO9B_9M54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwgKAi4NMUajKR_9x54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyc_Yqz_Pirccg37Xd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxV9b_1YHRgjTxeiSZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
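Because the raw response is a JSON array keyed by comment ID, the look-up-by-ID view can be backed by a simple dictionary index. A minimal sketch (variable names are illustrative, not the tool's internals; the payload is truncated to two rows from the array above):

```python
import json

# Two rows excerpted verbatim from the raw LLM response above.
raw_response = '''[
  {"id":"ytc_UgwJ9yTszyCxzHD08kh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxZ9wbsbh1Mh4JssUt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

# Index the batch of codings by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

row = codings["ytc_UgwJ9yTszyCxzHD08kh4AaABAg"]
print(row["responsibility"], row["emotion"])  # user fear
```

The same index makes it cheap to join each coding back to its source comment, which is how the table and raw-response panels can stay in sync.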