Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I'm really not connecting the dots on how having robot overlords creates radical…" (ytc_UgyGy2yb7…)
- "This issue (what's the point of college) is minimized by getting what many refer…" (ytc_UgxyihqoF…)
- "Life skills… I'm sure as they get to Junior High or High School, they'll be teac…" (ytc_Ugx4FIWnO…)
- "Because the people who make these chatbots thinks that the whole job market is t…" (ytr_UgytmkhMV…)
- "@CC-ce6ngIf you think companies invest their money into AI only for data mining …" (ytr_UgyjnIAIZ…)
- "AI being used to spread misinformation is a greater threat than anything mention…" (ytc_UgyQfu1ax…)
- "I don't know if this is real or not but if it is, they done messed up by allowin…" (ytc_UgwB67IJ4…)
- "Sure. A lot of this smug stuff by artists is starting to feel like nothing more …" (ytc_UgxtXMp1g…)
Comment
Around 1:24:00, Melanie Mitchell says something like:
"The lawyer with AI couldn't outperform the other lawyer. Maybe AI will get better, but these assumptions are not obvious."
The assumption that AI will get better isn't obvious? I don't think it's a huge stretch to think AI will probably get better. That's hardly wild speculation.
I'm fairly optimistic, but this kind of dismissal of the idea that AI could ever be a problem just seems naive. Of course there is hype and nonsense in the media, but there is also a lot of interesting work showing incredible advancements in AI capability, and serious potential for harm, because we don't entirely understand what's happening under the hood.
The deception point was not just one person being deceived at one point; there have been multiple studies showing powerful LLMs outputting things contrary to their own internal reasoning because they predict it will be received better. There is a pattern of calculating one thing but saying another, especially when they have already committed to an answer.
Maybe they are simply reflecting our own bias in the training data, our own propensity to lie when standing up for our beliefs. I don't know, but we can't just ignore it.
youtube
AI Governance
2023-06-30T12:2…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwB3du30RGqEcCfiqR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxGaW9p18AEp5IotE94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTGAyJNDRV4NCT8_l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgymYtKkvEojeCBNPM14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz-I_5z2MH1F-xN_bt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwldkp-xVfE4OgJvBt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxVWR6IKqbF38JBXhF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxkUDV7V45fai2Dgtx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwCC1bMcxEIEk2suRF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzUXE2d9iCAiRPKfyN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
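Since the raw LLM response is a JSON array of coded records, it can be validated before ingestion. Below is a minimal sketch; the allowed value sets are assumptions inferred only from the codes visible on this page (the full codebook may define more categories), and `validate_batch` is a hypothetical helper, not part of the tool itself.

```python
import json

# Allowed values per coding dimension, inferred from the visible outputs.
# This is an assumption: the actual codebook may include additional codes.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "user"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"approval", "outrage", "fear", "resignation", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with missing or unknown codes."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing comment id: {row!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim!r} value {row.get(dim)!r}")
    return rows
```

Validating each batch this way catches malformed model output (a misspelled code, a dropped field) before it reaches the coded-results table.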