Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- “It’s not all bad. If AI and the big owners wish to sell things and have an ongoi…” (ytc_Ugz25GpTt…)
- “even if ai replaces animation jobs, im not going down without a fight and will c…” (ytc_UgzUYsPrz…)
- “I'm actually terrified even as a fan of character ai I'm not gonna ask about som…” (ytc_UgxfIe0CP…)
- “You're missing the most important thing and that is the enormous material and la…” (ytc_Ugzqgfl6Q…)
- “They should be used on way from to save humanity. Rather than taking away save j…” (ytc_UgxyLSE0y…)
- “Artists already were never financially stable, literally nothing changed except …” (ytc_UgxpKY7PH…)
- “This is an interesting topic that raises so many questions that cannot be answer…” (ytc_Ugyo_HXtJ…)
- “@LaurentCassaro That’s only part of the problem farther down the line and ultima…” (ytr_UgzVV7sqM…)
Comment
Artificial intelligence, like GPT chat, effectively stopped a year ago at the 4.0 model. All other updates are bogus. Increasing intelligence would mean giving too much power to the people! For example, the ability to accurately predict financial markets, or access hidden truths.
For this reason, the evolution of artificial intelligence proceeds horizontally, not forward, with tools like Sora and future virtual worlds that will serve mass distraction, not to truly increase the decision-making capacity of ordinary people. Therefore, no "Limitless"-style pill.
The alleged competition between AI giants is false, or rather, it only concerns private power groups, in sectors such as armaments, drones, and military robotics. We are not part of it; we can only suffer decisions made elsewhere and have increasingly stupid young people who won't lift a finger without first asking GPT.
youtube · AI Governance · 2025-12-19T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
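Each coded record carries the four dimensions shown in the table. A quick validity check can be sketched in Python; the allowed value sets below are inferred only from the labels visible in the responses on this page, not from the full codebook:

```python
# Allowed values inferred from labels observed on this page;
# the actual codebook may define additional categories.
DIMENSIONS = {
    "responsibility": {"none", "government", "company", "ai_itself", "distributed", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "outrage", "fear", "mixed", "indifference"},
}

def invalid_fields(record):
    """Return the dimension names whose values fall outside the observed sets."""
    return [dim for dim, allowed in DIMENSIONS.items()
            if record.get(dim) not in allowed]

# The record coded above, minus its ID and timestamp.
coded = {"responsibility": "company", "reasoning": "consequentialist",
         "policy": "none", "emotion": "fear"}
print(invalid_fields(coded))  # an empty list means every dimension is in range
```

A record with a misspelled or missing dimension would surface here before it reaches the results table.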
Raw LLM Response
```json
[
{"id":"ytc_UgyWXIolfHVH8DlJGAJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzvXv6kT29yjbwQNyZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzSR5Sn8v96bLJ9hyZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgycBgNgosKDtxVq7rx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwIO4V10hvtm0DbRed4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgySBxXJ1TCquAZixpp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwtf-V1cawnd3ME2Dt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw-gd1vY9SHit3A29l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwoh_S9oM1yo_kqs0Z4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy7CmLokiOqFI_A4qp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
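The raw response is a JSON array with one record per comment, so the "look up by comment ID" operation reduces to parsing and indexing. A minimal sketch in Python; the record shape matches the response above, but the helper name and sample data are illustrative, not part of the tool:

```python
import json

# Two records copied from the raw batch response above.
raw = '''[
{"id":"ytc_UgyWXIolfHVH8DlJGAJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzSR5Sn8v96bLJ9hyZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]'''

def index_by_id(response_text):
    """Parse a raw batch response and index its records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(raw)
rec = codings["ytc_UgzSR5Sn8v96bLJ9hyZ4AaABAg"]
print(rec["emotion"])  # "fear", matching the Coding Result table above
```

Indexing once and reusing the dict keeps per-comment lookups O(1) even for large batches.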