Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "These Phsychoathic murderous trillionaires should not have ANY POWER over the pe…" (`ytc_UgxKlAMsW…`)
- "Has he see what AI says about what he has done with Trump? It is honest I will g…" (`ytc_Ugy2jkmxm…`)
- "I just don’t understand how if America slows down or henders ai development how …" (`ytc_UgxtWtYJd…`)
- "he said chatgpt did all the work so dude was just putting prompts into gpt all d…" (`ytr_Ugz2JkSna…`)
- "i have only used ai image generation back when it was terrible as an art referen…" (`ytc_UgzE4BRxB…`)
- "A robot with a gun...humanity is dead / It's the movie 🎬 i- robot real time…" (`ytc_UgzcZfw4m…`)
- "Oh well......AI? / Is the car looking for a parking spot? / Sooooooo glad I'm old & …" (`ytc_Ugw5MbCX6…`)
- "Only thing I can suggest is that you could get a few grow lamps and do herbs? Fl…" (`rdc_eh57au1`)
Comment
Listening to many podcasts on AI is forcing to believe computer scientists need some social science education. AI will be as powerful as human wants it to be. They are under accounting for human ability to resist unwanted path. Extent they will be powerful will be determined by many factors: policies, communities, social system, etc. For instance, despite advances in auto- pilot for planes, there have been no attempt to completely replace pilots with machine or automation. It is technological possible, but they know it will be resisted and the market will disagree. It will be same for many of the AI systems
youtube · AI Governance · 2025-12-05T01:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyUxRpcW3m4Oa8MQOt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxNgHvetyJmP3wNpPp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyMDC8m8jDdWEVovjR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz4lg1qUiS4XAVXo-t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxRG72du5S7mL9FC2B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyQVBtuL3R9eNXG3yt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyq8p95FKR0z5RJl5d4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxnrwHG1jZaYPrmbth4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzdTgHylS6wkMQUOsd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyK9fPBALTcFds3HAR4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
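Before trusting a batch like the one above, it is worth checking that the model returned valid JSON and that every row uses a known category. A minimal validation sketch, assuming the allowed values per dimension are exactly those seen in these sample responses (the actual codebook may define more):

```python
import json

# Allowed values per dimension, inferred from the sample responses above
# (an assumption, not the project's definitive codebook).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "approval", "mixed"},
}

def validate_batch(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of problems (empty if clean)."""
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    problems = []
    for i, row in enumerate(rows):
        if "id" not in row:
            problems.append(f"row {i}: missing id")
            continue
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                problems.append(f"{row['id']}: bad {dim}={value!r}")
    return problems

raw = '[{"id":"ytc_x","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"}]'
print(validate_batch(raw))  # []
```

Rows that fail validation can then be queued for re-coding rather than silently written to the results table.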