Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
If a robot ever asks me, "Do I have a soul?" or, "Am I conscious?" without anyon…
ytc_UghtsfO07…
Making kids stupid. They can't read or write sentences. Take reasoning out of it…
ytc_UgzVxWpyh…
This is a popular topic but I don't believe AI will take over all jobs. It will …
ytc_UgwNZcOWd…
Ai is not stealing anything, copyright images are protected to not be used comme…
ytc_UgypFaCQy…
All tools are neutral, the stakeholders will dictate how things go (from their p…
ytc_UgyYYY7g4…
At some point they will create a super convoluted model, capable of really mimic…
ytc_Ugwq5pnGy…
I mean, I’ve seen quite a few deepfakes in my time, and it’s probably close to 1…
rdc_i6rknvo
This is one of those things where the human pride absolutely comes before the fa…
ytc_Ugy6_ixvw…
Comment
Love your content, professor. You helped me tons through undergrad and I expect to utilize your videos just as much in my grad program.
Just to play devil’s advocate, I think Elon used to be against the development of AI a while back, but he has since changed his stance. I believe Elon said that this is similar to a Pandora’s box situation, where it has already been opened and will never close again. He was essentially stating that even if we all stop improving and developing AI today, there will be someone who decides to ignore this and develop it (even for nefarious reasons). As a result, he chose to throw AI development at the forefront so that he could hopefully push AI toward a less “kill all humans” future and a more integrated and supportive tool for improving the best parts of humanity. What are your thoughts on that?
youtube
AI Governance
2025-08-26T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_Ugyg2-qtHnYW_A_ooGp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzLOVkNuJWyYDB9w5F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzO3uHoGYVzPM75oCt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgybiouL0H_iKkWQJ0N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx6aZ4ZGrWD-uJeiYl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
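The raw response above is a JSON array with one object per coded comment, carrying the four dimensions shown in the Coding Result table. A minimal sketch of how such a batch response could be parsed and indexed by comment ID for display (the `index_codings` helper and the trimmed two-entry sample are illustrative assumptions, not the tool's actual code):

```python
import json

# Field names are taken from the raw LLM response above; the sample batch
# is a trimmed, hypothetical two-entry version of that array.
RAW_RESPONSE = """[
  {"id": "ytc_Ugyg2-qtHnYW_A_ooGp4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzO3uHoGYVzPM75oCt4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw batch response and index codings by comment ID."""
    entries = json.loads(raw)
    return {e["id"]: {d: e[d] for d in DIMENSIONS} for e in entries}

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_Ugyg2-qtHnYW_A_ooGp4AaABAg"]["emotion"])  # approval
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: each lookup is a single dict access instead of a scan over the batch.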