Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "If both AI are capable of learning? It’s a matter of time before they’ll ask the…" (ytc_UgxfOWdrn…)
- "I think they're going a little far. To be honest, my biggest problem with autono…" (ytc_UgjMAnTX7…)
- "I don't think we need to worry about AI ultimately killing humans as only humans…" (ytc_UgzGSMBwP…)
- "Self driving (for now) requires the full attention of the driver. The accident t…" (ytc_UgzLnMa0p…)
- "There's a district-wide art competition in my country that allows kids to pursue…" (ytc_Ugx7VNfIE…)
- "AI agents can leak your secrets, have downtime, and present compliance concerns.…" (ytr_Ugw4It-BX…)
- "I've gone over this very topic several times in the last few years and have come…" (ytc_UgggitcG_…)
- "Why do people keep thinking good will prevail when time and time again, humanity…" (ytc_UgzVOIoHq…)
Comment
> A great interview, I really like the way Mr H gives short and direct answers, unlike a lot of people you have interviewed who can talk the hind legs off a donkey.
>
> For me there was a common factor - AI being given data. If AI isn't given data, it has no power.
>
> Maybe people will start to reduce the amount of data they transmit digitally after watching this, if seems the likes of Facecloth and Instasham played to everyone's egos for the last 20 years and now there is such a wealth of data, our egos could have been building our own downfall.
>
> One criticism (and I rarely do this) but Stephen "tidying up" while he was finishing the interview came across as quite rude, as though he couldn't wait to finish.
youtube · AI Governance · 2025-06-22T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx6jZCFDLBhI4W8WtN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyL7Aalfce2ZxuDIT94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy_sJtGldpkmIWhb0h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxD6OsyvBS6raCEf9d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwUn5_hlfk4DAo5gCh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwieOdzc4vswXl2fRl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy5syOs72QC-Ym6sDF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwd7mkT1B217g5i06R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxFe4X6iZzb8NslAz94AaABAg","responsibility":"government","reasoning":"unclear","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugzn0vJdyskPDnMXlM54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
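The lookup-by-comment-ID view can be reproduced offline from a raw response like the one above. The sketch below is illustrative, not part of the tool: the field names come from the JSON shown, but `raw_response` is truncated to two records for brevity, and the `index_codes` helper is a hypothetical name.

```python
import json

# Two records copied from the raw model output above (truncated for brevity).
raw_response = """
[
  {"id": "ytc_Ugx6jZCFDLBhI4W8WtN4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyL7Aalfce2ZxuDIT94AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "liability", "emotion": "outrage"}
]
"""

# The four coding dimensions, as they appear in each JSON record.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse the model's JSON array and key each record by comment ID,
    rejecting records that are missing any coding dimension."""
    codes = {}
    for rec in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing dimensions {missing}")
        codes[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return codes

codes = index_codes(raw_response)
print(codes["ytc_Ugx6jZCFDLBhI4W8WtN4AaABAg"]["emotion"])  # approval
```

Keying on the `id` field makes per-comment lookup O(1), and the missing-dimension check surfaces malformed model output early instead of failing later during analysis.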