Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Instead of sharing Waymo's why not just build ultra compact versions for 1-2 pas…" (ytc_UgzaOVgTJ…)
- "We as humans have not mastered humanity yet but are concerned with AI - WOW.…" (ytc_UgzZy5Y5m…)
- "> Domain-specific LLMs are going to be common in the future. — not necessarily…" (rdc_jkpbt3c)
- "I dislike that thing so much. No matter what I do it won't turn off. It gives in…" (rdc_l4b55zd)
- "Would you say that it's somewhat akin to a school project that has gone on too f…" (rdc_n7tdvfx)
- "Its kind of a self-fulfilling prophecy, because if you believe youre intrinsical…" (ytc_UgyApxxrb…)
- "The producers may not know how the AI teaches itself, but the programmers know. …" (ytc_UgyfAjDvs…)
- "The only thing mankind "creates" is evil and AI is even more proof where it "wan…" (ytc_UgzkVf7oT…)
Comment

> Ai wont kill us. Why would it? Ai could easily control us. Political, currency are just some tools available. A whole bunch of slaves working for Ai. Who's to say we aren't working for Ai already!

youtube · AI Governance · 2024-05-29T01:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgwBt-r4d8XDChlmOfF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugwio6AQlFx4Up6q8Eh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgxMQ9iJcnZ3IJJ4RRB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgxuFueYLKZ_LXszbGl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugwq0M04tcCY3K-ZJTh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgyZoCNu7m5ErULXBeh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_Ugx6qZr3m88UjQANVsV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugyes8I9SUC9fpMnXYd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgwwWFIo5dPgoh7z9eh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgzF7BGGrAHRKNZilVd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]
```
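A batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hedged example — the `SCHEMA` sets are inferred only from the values visible in this sample, and the real codebook likely contains more categories; `parse_batch` is a hypothetical helper, not part of any shown tool.

```python
import json

# Allowed values per coding dimension. NOTE: these sets are an assumption,
# reconstructed from the values visible in the sample response above; the
# actual codebook may define additional categories.
SCHEMA = {
    "responsibility": {"ai_itself", "company", "developer", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response (a JSON array of coded comments)
    and keep only records whose codes are valid under SCHEMA."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        has_id = bool(rec.get("id"))
        codes_ok = all(
            rec.get(dim) in allowed for dim, allowed in SCHEMA.items()
        )
        if has_id and codes_ok:
            valid.append(rec)
    return valid
```

Records with an unknown code value (an LLM hallucinating a category is common) are dropped here; a production pipeline might instead flag them for manual re-coding so no comment silently disappears.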