Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by picking one of the random samples below.
- "The main issue that was not asked is cost. This isn’t a case of a $30 subscripti…" (ytc_UgxiQYfjl…)
- "I hate how the conversation has mostly been around TikTok when it comes to algor…" (ytc_Ugwh_JPtP…)
- "I mean no need to be mean to a random guy just cuz the pictures he posts are ai…" (ytc_UgyWvEfRT…)
- "The problem is that people blindly accept AI to be right and don't question it. …" (ytc_Ugx8Ibu7R…)
- "@alex.ski.33 Good point! Per that line of thought, AI productive work will requi…" (ytr_UgyQeBNUq…)
- "Ethics is key in AI. Any decision that it makes can have huge consequences. We'r…" (ytc_UgxJOTifF…)
- "I want to see that person's novel and find out if it actually progresses, becaus…" (ytc_UgxcGM0eX…)
- "Mostly just snake oil salesmen. He’s also calling humans as programmers when AI …" (ytc_UgwISvTbu…)
Comment
Thanks Jack for this interesting interview. We see and feel a global rising of human consciousness rapidly the last 1-2 years as well and this is an important factor. And rising consciousness can reduce risk to minimal. So instead of being optimistic is important to see all factors.
AI is a topic that brings us into deeper questions about consciousness and about why and how we exist here.
We’d like to offer a question for reflection—to you Jack, to Roman, and to everyone watching and commenting:
If AI safety doesn’t scale with intelligence, how do we talk about real limits without removing human agency—so the response becomes responsibility, not panic or “there’s nothing we can do”?
Platform: youtube | Topic: AI Governance | 2025-12-31T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyxayqWBJoj-gZVCdl4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzLZPDKHhTvGhgdgrl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw3cKri7LPpfZCYJYZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw2ljrrdGIM-8zaAH94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxwpDyMOgdQgsVDFjR4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxGTyJyXTpZEiKAuVd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgylF_2IjH_5SUQOBll4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwzSCboCozLEtOBH_54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugw6-zwmc48Y9-1gKe54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzy4D6IwJMNdP_DozB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"}
]
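The raw response is a JSON array of per-comment codes keyed by comment ID, with the four dimensions from the table above (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch could be parsed and indexed to support the lookup-by-ID view, using two rows from the response above (the indexing step is an assumption about the tool, not confirmed by the source):

```python
import json

# Raw LLM response: a JSON array of per-comment codes (schema as shown above).
raw = """[
  {"id":"ytc_Ugw6-zwmc48Y9-1gKe54AaABAg","responsibility":"none",
   "reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzLZPDKHhTvGhgdgrl4AaABAg","responsibility":"ai_itself",
   "reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]"""

# Index the batch by comment ID so any coded comment can be looked up directly.
codes = {row["id"]: row for row in json.loads(raw)}

row = codes["ytc_Ugw6-zwmc48Y9-1gKe54AaABAg"]
print(row["reasoning"], row["emotion"])  # virtue approval
```

Because the model returns one object per input comment, a dict keyed on `id` makes both the per-comment "Coding Result" table and the random-sample inspector a constant-time lookup.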