Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by its comment ID, or pick one of the random samples below (a programmatic lookup sketch follows the list).
- "I wonder if AI can cause you to head towards investments inflate in their value …" (ytc_UgxrklfqR…)
- "I love Robert. But this same argument could be made for a lot of the technology …" (ytc_UgxVHCbXD…)
- "Comparing a smoke bomb to a nuke right there. Digital drawing is still done by a…" (ytc_UgxORZAl7…)
- "I think a big difference will constantly happen between HUMAN and AI. If they ge…" (ytc_Ugzk79s_O…)
- "I think both extremes are wrong. People who are telling it can completely autono…" (ytc_Ugwsyp42Y…)
- "I would rather see a world filled with self-driving cars than human driven ones …" (ytc_UgiwHyTQg…)
- "As an engineer I automated a production line. The pattern was to bring in machin…" (ytc_Ugwjdt81a…)
- "Until you’ve been a woman on the internet. Then people have no problem telling y…" (rdc_j0bwosx)
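The same lookup can be done programmatically against the coded output. A minimal sketch, assuming the records are stored as JSON Lines in a file named `coded_comments.jsonl` (the file name and storage format are assumptions; only the `id` field is confirmed by the raw response shown further down):

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the coded record for one comment ID, or None if it is absent."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)  # one coded comment per line (assumed layout)
            if record.get("id") == comment_id:
                return record
    return None

# e.g. the first ID from the raw response below:
print(lookup_comment("ytc_UgzH2YyPlsgUSMyZiUB4AaABAg"))
```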
Comment
„Those understand very little of life who only understand the understandable“ [badass author Marie Ebner von Eschenbach]
This means we cannot trust someone with only nerdy tendencies on AI since he sees the relationship from intelligence to consciousness too mechanically.
What we need is a person who united both. Taylor swift for example is both rationally gifted, musically and socially. She could evaluate the future of AI better once she informs herself well of how AIs internal works. Her mastermind will draw correct conclusions.
Also: bill gates doesn’t internally feel the struggles of 98% of people which AI will now indirectly cause due to mass layoffs. He couldn’t care less because he is a billionaire
youtube · AI Governance · 2025-07-28T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
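The four coding dimensions form a small closed codebook. A sketch of the label sets, using only the values that appear on this page (the enum names are mine, and the full codebook may define more labels):

```python
from enum import Enum

class Responsibility(str, Enum):
    NONE = "none"
    AI_ITSELF = "ai_itself"
    COMPANY = "company"
    UNCLEAR = "unclear"

class Reasoning(str, Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    VIRTUE = "virtue"
    UNCLEAR = "unclear"

class Policy(str, Enum):
    NONE = "none"
    BAN = "ban"
    LIABILITY = "liability"
    REGULATE = "regulate"
    UNCLEAR = "unclear"

class Emotion(str, Enum):
    APPROVAL = "approval"
    FEAR = "fear"
    RESIGNATION = "resignation"
    OUTRAGE = "outrage"
    INDIFFERENCE = "indifference"
    MIXED = "mixed"
    UNCLEAR = "unclear"
```

Mixing in `str` makes each member compare equal to its raw JSON string, so `Emotion("fear")` round-trips directly from a parsed record.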
Raw LLM Response
[{"id":"ytc_UgzH2YyPlsgUSMyZiUB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzNzLrk6E9lS_7q3Hx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgznHLT3uRjvHMtmFRt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzB5GI80wblF1o4fjB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzHnqXgrywo8uQx4RF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwh2avpyptaWzY9mS94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzjsPpBUqB9EzE_G4p4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxHgOMGSgmtZBA-8ep4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwieuFZUoFCVgHCtd94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy2w9nnD1o0ZlO4h_R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"})