Raw LLM Responses
Inspect the exact model output behind any coded comment, or look up a record directly by its comment ID.
Random samples:

- ytc_UgxH2mgWI… — "I have a question about AI that gives me peace but no one ever talks about" High…
- ytc_UgxHN6KSU… — "It seems like a lot of the things being credited to AI were already being done a…"
- ytc_UgxfcELkL… — "Been using Pneumatic Workflow to automate project tasks; its conditional logic r…"
- ytc_UgyrP1n6F… — "I'm taking the opportunity, while there is still time and it is still possible, to tell you that I…"
- rdc_odhwy0q — "If I wanted to read a ChatGPT book review, I'd go ask ChatGPT myself instead of …"
- ytc_UgzgwjDeM… — "It sounds like we need to lock tf in as a species, cultivate deeper philosophica…"
- ytc_Ugy0ZmNaA… — "that first picture being AI scares me the most like now people can fake nostalgi…"
- ytr_UgycTPldy… — "What on earth are you talking about? I can't believe there are people that belie…"
Comment
In the video, Professor Stuart Russell discusses the importance of how we treat AI and the implications of our actions on their behavior. Just like a bulldog, AI systems can learn from their environment and the way they are treated. If we model respect, love, and compassion, there's a better chance these systems will reflect those values. However, if we act aggressively or dishonestly, AI might develop similar traits.
What do you think are some ways we can ensure AI learns positive behaviors from us?
youtube · AI Governance · 2025-12-24T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_UgxPQsdyAluv7VMoCrR4AaABAg.AR69GX83V25AR6pUTt-qOn","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwOcmcVu2iPdHdlvCN4AaABAg.AR5xe_lAh6IAR6qIoS7dFJ","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugzact06OBboW8dqhJx4AaABAg.AR5o64wZkIbAR6r7UOX0_A","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgyE_IycZrfNj7BkS7d4AaABAg.AR55LiodjTMAR6tE7XShKO","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgyoDhTaK6EUKRCYXr94AaABAg.AR4xL0I2opMAR6ti5FNiJO","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgzaNVAyY0y-DJD6n7V4AaABAg.AR4x4RPSmaQAR6u_70y6po","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytr_UgzJdW7OgJi0-Y3qAbJ4AaABAg.AR4uVOg4Ih5AR6v0nnDcRf","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgyTRYHlxOAX69X5xCx4AaABAg.AR4t0NRFVlfAR6vjSzNSEF","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytr_Ugy9vuvVoeJfNPW6dCN4AaABAg.AR4rrOfAdBhAR6wS7YvCBl","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Ugwhiyj6dEolU9bUaix4AaABAg.AR4pq4vfXyVAR6xHWQ4Yfd","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}
]
```
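A raw batch like the one above can be checked before it is ingested into coded results. Below is a minimal validation sketch in Python; the `ALLOWED` value sets are assumptions inferred only from the values visible in this page, and the real codebook may define more categories.

```python
import json

# Allowed values per coding dimension (assumed from the examples on
# this page; the actual codebook may include additional categories).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user", "distributed", "none"},
    "reasoning": {"virtue", "consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records that have an id
    and whose four dimension values all fall in the allowed sets."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip malformed entries rather than failing the batch
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

# Hypothetical one-record batch in the same shape as the response above.
raw = ('[{"id":"ytr_example","responsibility":"developer",'
       '"reasoning":"virtue","policy":"industry_self","emotion":"approval"}]')
print(len(validate_batch(raw)))  # → 1
```

Skipping malformed records instead of raising keeps one bad line from discarding the rest of a batch; rejected IDs could instead be logged for re-coding.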