Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding directly by comment ID.
Random samples (truncated previews):

- ytc_UgyedFbYk… — "I believe in AI but I don’t believe in this guy only mimicking and copying other…"
- ytc_UgwhXpMeC… — "What happened for me / Me: Rule number 1, only respond with 1 word / Rule number 2,…"
- ytc_Ugy5DTV5b… — "It’s about AI not being regulated. A solution to that problem is for artists to …"
- ytc_Ugw1JxALA… — "Humam has point of view, and ai learns on humans point of view on music. It's ju…"
- ytc_UgzGWk1As… — "AI contradicts itself. I've seen videos where it sacrifices itself but then does…"
- ytc_UgzKtsnbL… — "There was a reason why AI was not mass advanced even tho it has existed since 19…"
- ytc_Ugx1ZQX7-… — "AI is and will be weaponized against us / We live in such a f’d up time man…"
- ytr_Ugza3670B… — "@eryalmario5299 That would be awesome! Yeah, tasks humans can’t/don’t want to do…"
Comment
People fail to realize that humankind has the capacity to create things which are almost or equivalent to our own features. For example, we will never be able to create a robot which has the ability to learn better than we can, because there is no basis or standard which we can create that robot on. We know nothing which has that capacity in the universe, and thus, we have no way of replicating it. Progress is inevitable, such as evolution.
Source: youtube · Posted: 2013-06-24T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwf8OHi6HLTfI3l9Tl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy2wim4Us7Hq2JT5fh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzs9gC2tJDC-stZdFx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyvDTGSeY2ddK3gNQp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzwZ3jfrMIcqMn2plR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyJOLbkO5X0n5YcUn54AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxGE5rP2don4fja5Yd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwsC4kTTP1wwi9SEFN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwc6bUu3Lj4NXzfPAl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwqA6jr9GQVtv3byz14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
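The raw response is a JSON array in which each element carries an `id` plus the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A lookup by comment ID can be sketched as below; this is a minimal illustration, not the tool's actual implementation, and the two sample entries are copied from the response above.

```python
import json

# Two entries copied from the raw LLM response shown above; in practice the
# full JSON array returned by the model would be loaded here.
raw_response = """[
  {"id": "ytc_UgyJOLbkO5X0n5YcUn54AaABAg", "responsibility": "company",
   "reasoning": "mixed", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwqA6jr9GQVtv3byz14AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]"""

# Index the array by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for a single comment ID."""
    return codings[comment_id]

print(lookup("ytc_UgyJOLbkO5X0n5YcUn54AaABAg")["policy"])  # regulate
```

Building the dict once up front keeps repeated ID lookups cheap, which matches the "look up by comment ID" workflow this view supports.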