# Raw LLM Responses

Inspect the exact model output for any coded comment. Look up a comment directly by its ID, or pick one of the random samples below.
## Random samples (click any to inspect)

- `ytc_Ugy9lx0Ep…`: "I've been seeing all the dangers from the beginning. And really ALL Ai generated…"
- `ytc_UgyTc4gDq…`: "funny how they say ai art raised the bar when it actually did the opposite. i no…"
- `rdc_fasgop1`: ">Signing into law a bill requiring all vehicles manufactured after 2020 to be…"
- `ytc_UgzzG84Ak…`: "omg please place these robots with pedos so that they leave children alone and a…"
- `ytc_UgwHlgYWr…`: "I personally see AI as both good and bad. What I mean by that is this: Good: G…"
- `rdc_fn5fg07`: "I think most people are pretty satisfied. Hes running a minority government. So…"
- `ytc_UgzldAFsY…`: "So the AI is not quite like 'Data' from Star Trek TNG but once that's achieved t…"
- `ytc_UgwALsixk…`: "I am an AI enthusiast myself but I was thinking recently and I couldn't shake of…"
## Comment

> The video discusses the potential dangers of advanced AI and the importance of ensuring AI acts in humanity's best interests. Professor Stuart Russell emphasizes the need for safety measures but does not specifically mention Asimov's laws. It raises the question of whether such ethical guidelines could be developed for AGIs to prevent harm to humans and ensure they align with human values.
>
> Could a modern version of Asimov's laws be effective in guiding the development of AGIs?

youtube · AI Governance · 2025-12-24T13:0…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
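
These four dimensions and their labels form a small closed vocabulary. Below is a minimal Python sketch of that schema, assuming the label sets visible on this page are complete (the project's actual codebook may define additional values); `CodingResult` and its `from_record` helper are illustrative, not part of the pipeline itself:

```python
from dataclasses import dataclass
from enum import Enum

# Label sets inferred from the values visible on this page;
# the real codebook may include more.
class Responsibility(Enum):
    DEVELOPER = "developer"
    COMPANY = "company"
    USER = "user"
    GOVERNMENT = "government"
    DISTRIBUTED = "distributed"
    NONE = "none"

class Reasoning(Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    VIRTUE = "virtue"
    UNCLEAR = "unclear"

class Policy(Enum):
    REGULATE = "regulate"
    LIABILITY = "liability"
    INDUSTRY_SELF = "industry_self"
    NONE = "none"
    UNCLEAR = "unclear"

class Emotion(Enum):
    FEAR = "fear"
    OUTRAGE = "outrage"
    APPROVAL = "approval"
    RESIGNATION = "resignation"
    INDIFFERENCE = "indifference"

@dataclass
class CodingResult:
    comment_id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion

    @staticmethod
    def from_record(record: dict) -> "CodingResult":
        # Enum construction validates each label: an off-schema value
        # from the model raises ValueError instead of slipping through.
        return CodingResult(
            comment_id=record["id"],
            responsibility=Responsibility(record["responsibility"]),
            reasoning=Reasoning(record["reasoning"]),
            policy=Policy(record["policy"]),
            emotion=Emotion(record["emotion"]),
        )
```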
## Raw LLM Response

```json
[
  {"id":"ytr_UgxPQsdyAluv7VMoCrR4AaABAg.AR69GX83V25AR6pUTt-qOn","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgwOcmcVu2iPdHdlvCN4AaABAg.AR5xe_lAh6IAR6qIoS7dFJ","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugzact06OBboW8dqhJx4AaABAg.AR5o64wZkIbAR6r7UOX0_A","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgyE_IycZrfNj7BkS7d4AaABAg.AR55LiodjTMAR6tE7XShKO","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyoDhTaK6EUKRCYXr94AaABAg.AR4xL0I2opMAR6ti5FNiJO","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytr_UgzaNVAyY0y-DJD6n7V4AaABAg.AR4x4RPSmaQAR6u_70y6po","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytr_UgzJdW7OgJi0-Y3qAbJ4AaABAg.AR4uVOg4Ih5AR6v0nnDcRf","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgyTRYHlxOAX69X5xCx4AaABAg.AR4t0NRFVlfAR6vjSzNSEF","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytr_Ugy9vuvVoeJfNPW6dCN4AaABAg.AR4rrOfAdBhAR6wS7YvCBl","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugwhiyj6dEolU9bUaix4AaABAg.AR4pq4vfXyVAR6xHWQ4Yfd","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}
]
```
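
Because each response is a flat JSON array keyed by comment ID, the ID lookup this page offers reduces to parsing and indexing that array. Here is a minimal sketch, assuming the model returned well-formed JSON (a production pipeline would also guard against truncated or malformed output); `index_batch_response` is an illustrative helper, and the two-record excerpt reuses real records from the response above:

```python
import json

# Two records excerpted verbatim from the batch response above.
raw_response_text = """[
  {"id": "ytr_Ugzact06OBboW8dqhJx4AaABAg.AR5o64wZkIbAR6r7UOX0_A", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_UgxPQsdyAluv7VMoCrR4AaABAg.AR69GX83V25AR6pUTt-qOn", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

def index_batch_response(raw: str) -> dict[str, dict]:
    """Parse one raw batch response and index its records by comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

by_id = index_batch_response(raw_response_text)
coded = by_id["ytr_Ugzact06OBboW8dqhJx4AaABAg.AR5o64wZkIbAR6r7UOX0_A"]
print(coded["responsibility"], coded["emotion"])  # prints: developer fear
```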