Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I usually listen to these episodes on the go and used Cognify to capture what st…
ytc_UgxFQ3TXn…
@coreygrubb8020 still better than ai slop. a real, breathing human person had th…
ytr_UgyBriZK0…
this arguement is so stupid.. the digital one is still drawn by yourself, but AI…
ytc_UgwBPIHLw…
In my experience, Anthropic makes a genuinely superior product. In my opinion, t…
rdc_o7z7c6h
AI chats in court are a legal minefield now. One judge says no privilege, anothe…
rdc_ohqsw0s
...it has phd expertise in every field...including in AI...😂! With these types …
ytc_UgyK6TFIz…
There are douchebag people who use ai art to enter the artist alley. And the peo…
ytc_UgzNBTbO8…
How about we just get good doctors again instead of using AI to help weaker ones…
ytc_Ugz5TG_bL…
Comment
I'm fine with the introduction of AI to life and them doing our jobs and working better than us so long as there is a fail safe of sorts because if a robot develops the ability to emote lets call it then they can create an adjective of getting rid of us the humans because we are no longer a benefiting factor to them and because they are programmed to be a race Similar to humans and humans are flawed in the way that in the way that we don't respect and care for the other species on the planet ^
| Field | Value |
|---|---|
| Source | youtube |
| Posted | 2013-06-24T14:3… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
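Each coded comment carries the same four dimensions shown in the table above. A minimal sketch of that record shape, with allowed values inferred only from the responses visible on this page (the full codebook may define more):

```python
from dataclasses import dataclass

# Hypothetical value sets, inferred from the codings visible on this page only.
RESPONSIBILITY = {"ai_itself", "company", "none"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"regulate", "liability", "none"}
EMOTION = {"fear", "approval", "indifference", "mixed"}


@dataclass
class CodedComment:
    """One row of the coding result: a comment ID plus four dimensions."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        # True only if every dimension holds a known value.
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)


record = CodedComment("ytc_UgwqA6jr9GQVtv3byz14AaABAg",
                      "ai_itself", "consequentialist", "liability", "fear")
print(record.validate())  # → True
```

Validating each parsed record this way catches codings where the model drifted outside the label set.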
Raw LLM Response
[
{"id":"ytc_Ugwf8OHi6HLTfI3l9Tl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy2wim4Us7Hq2JT5fh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzs9gC2tJDC-stZdFx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyvDTGSeY2ddK3gNQp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzwZ3jfrMIcqMn2plR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyJOLbkO5X0n5YcUn54AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxGE5rP2don4fja5Yd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwsC4kTTP1wwi9SEFN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwc6bUu3Lj4NXzfPAl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwqA6jr9GQVtv3byz14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
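The "look up by comment ID" view above can be reproduced from a raw response like this one. A minimal sketch, assuming the response parses as a JSON array of objects keyed by `id` (the two sample rows below are copied from the response above):

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw = '''[
{"id":"ytc_Ugwf8OHi6HLTfI3l9Tl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwqA6jr9GQVtv3byz14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

# Index the batch by comment ID so any single coding can be looked up directly.
by_id = {row["id"]: row for row in json.loads(raw)}

coding = by_id["ytc_UgwqA6jr9GQVtv3byz14AaABAg"]
print(coding["emotion"])  # → fear
```

In practice the model output may contain surrounding text or malformed JSON, so production code would wrap `json.loads` in error handling before indexing.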