Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "It might be all sorts of things. AI does need a solid way to become physicalize…" (ytr_UgxG18gOO…)
- "1:20:00 - How could it be “too expensive” to have artists do what they do all of…" (ytc_UgxwmMT-s…)
- "Gig economy, fewer births, and AI job reduction could severely affect Social Sec…" (ytc_UgxUSRXF3…)
- "Learn to live Quaker regardless; greedy maniacs wipe out farmers, livestock, & T…" (ytc_UgxkdpTE1…)
- "How did it go? I'm in Mn also but south, in Minneapolis. Half anishinabe also fr…" (ytr_Ugw3FNCs3…)
- "Until physicist discover the fourth force, gravity, which is based on Albert Ein…" (ytc_Ugx__BgN-…)
- "IF EVERYTHING HINTON SAYS IS TRUE IT WOULD BE WISE TO PREVENT Ai FROM EVER BEING…" (ytc_UgysO4Rsb…)
- "AI is a lie. It's either programmed or it's not. Unplug the power cord,,, good …" (ytc_UgzHiKGDL…)
Comment
Owning an AI model that is either cutting-edge or kinda sorta open-source, you'll get pushed out of the market if you charge a price and can't keep up with innovations.
Entertainment. Human interaction will always be king, it's more authentic. They are just tools, and they will remain as such forever, I'm 99% sure of this, people need people after all.
Abstract jobs that can't be properly taught to an AI, but this will fizzle out over time, it just won't happen in the next 5-10 years so you're pretty safe until then.
Security. AI have systems that are innately vulnerable to other AI. People introduce human error that AI seeks to replace, but we can't be screwed with in the same way. They'll always be vulnerable.
Innovators. Scientists and the like. We're not exactly developing AGI yet. We'll be needing innovation for a very long time. They'll be replaced eventually, but I can't give a timeframe, it's too far off. Likely not within your lifetime.
Source: youtube · AI Governance · 2026-03-02T19:3… · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytr_UgxjL2hbNVlFRppXxSZ4AaABAg.AV0hqT-lAGnAV0k-ITFz7U","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgxjL2hbNVlFRppXxSZ4AaABAg.AV0hqT-lAGnAV0ktY7EaY1","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_Ugzr_dGza4U624ENI3t4AaABAg.AUOkMo84zuCAUOvTNu3P00","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgxH_SBMuCxyMuigWuh4AaABAg.AUOeovR6QmnAUP21DLg9I4","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytr_UgyOwyV98Nfe3smjXtF4AaABAg.AUNie2gvyycAUOsuy_79RK","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugz0pPGuvZfQN61xP694AaABAg.AUNavLZQgdEAUOzCxq30cr","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytr_Ugzn53Cj5QRmGewX4bp4AaABAg.AU8m26SnstaAUBezDUmN5A","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytr_Ugyd2ax3Q0c21Fthnsx4AaABAg.ATfDj4zlqFFATriCZ6E6lY","responsibility":"none","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytr_UgzYg0IvYNNNsbQFuJ54AaABAg.ATcy0TUIQi8ATwcG6ej36D","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgzOLZDQI3Lgsu5uAed4AaABAg.AT_eT1oevZBAT_mWM01xqt","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
```
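A raw response like the one above can be checked before it is accepted into the coding database. The sketch below parses the JSON and drops any record whose value falls outside the codebook. Note the `ALLOWED` sets are only inferred from the labels visible in this dump (the full codebook may define more values), and `validate_codes` is a hypothetical helper, not part of any tool shown here.

```python
import json

# Allowed values per coding dimension, inferred from the samples above.
# Assumption: the real codebook may contain additional labels.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate", "industry_self", "ban"},
    "emotion": {"fear", "indifference", "resignation", "mixed", "approval", "outrage"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose values
    are all within the (assumed) codebook."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in values for dim, values in ALLOWED.items())
    ]

# Hypothetical one-record response for illustration.
raw = ('[{"id":"ytr_example","responsibility":"none",'
       '"reasoning":"virtue","policy":"industry_self","emotion":"approval"}]')
print(validate_codes(raw))  # the single record passes validation
```

Invalid records are silently dropped here; in practice one would likely log them and re-prompt the model for the affected comment IDs.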