Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "No one is immune to automation. The world will reach a stage where humans are ob…" (ytc_Ugxm0t0ZI…)
- "Bostrom's notes about synaps editing and downloadable knowledge and skills is AI…" (ytc_UgybLro55…)
- "This is a real problem in fields that are still white boys clubs like engineerin…" (ytc_UgyNIjYy5…)
- "I think the thing that I dislike the most about AI art is that when I want to fi…" (ytc_Ugx2RsPyQ…)
- "Using Ai is literally you poisoning communities to save some time on your work o…" (ytc_UgzTZlJB6…)
- "The worst part is I’ve seen Tik Tok creators that market “their AI art” and prom…" (ytc_Ugz553vY0…)
- "Right after i clicked on this shorts, youtube recommended me a search about "ai …" (ytc_UgzEtOxRI…)
- "Anyone tech literate who has used AI will tell you this. It’s the seven figure e…" (ytc_UgyYwRTLW…)
Comment
Quite a few challenges to this view from the assistant professor. There are a multitude of jobs which aren’t technical or science based, jobs where empathy, human-interaction and culture are relevant. I could list dozens of roles straight off. But also, AGI still needs to have its objectives, priorities and values set by humans. It doesn’t know which task to do first unless instructed. Over time these instructions potentially become higher level, as orchestration between AIs becomes increasingly easy. Yet humans will still be needed to make decisions where there are options. I think there is a lot of fear where in fact there is opportunity- if we build in values to AI as it evolves to AGI and SI, along with basic guardrails, much as we do with our children, this should ensure that our joint roles are complimentary and not exclusive. Super intelligence is better with humans than without, especially if provided with the best of our values.
Source: youtube · Topic: AI Governance · Posted: 2025-09-07T13:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwB5DbfeR-Q0wX5lL14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwn-XCd0zWf4PgK-pJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw8jNCUQmFyHWoNGcJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxIwtA_f3pIeR3SA2t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzidIC7WvMBhy5g1Ex4AaABAg","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwbp9qrYLzAfI5chF94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxtKlf5ItmrSSxeQid4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxemCj-XncHA7IRe714AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwQNReAZ6ntIXRQS5J4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgznP2zGYyblXNm7K294AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}
]
```
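A raw response like the one above is a JSON array with one coding record per comment ID, so the page's lookup-by-ID function can be sketched in a few lines of Python. This is a minimal illustration, not the tool's actual implementation; the `RAW_RESPONSE` string reuses two records from the response above, and the function names are hypothetical:

```python
import json

# A raw LLM response: a JSON array of coding records, one per comment ID
# (two records copied from the response shown above).
RAW_RESPONSE = """
[
  {"id": "ytc_UgwQNReAZ6ntIXRQS5J4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwB5DbfeR-Q0wX5lL14AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw response and index its coding records by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

def lookup(codings: dict, comment_id: str):
    """Exact lookup by full comment ID; returns None if the ID was not coded."""
    return codings.get(comment_id)

codings = index_by_id(RAW_RESPONSE)
rec = lookup(codings, "ytc_UgwQNReAZ6ntIXRQS5J4AaABAg")
print(rec["emotion"])  # approval
```

Indexing once and looking up from a dict keeps each lookup O(1), which matters if many raw responses are loaded at once.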