Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Ah I'm sure they can find a way to make an AI that buys things.…" (`ytc_UgzlJRMFB…`)
- "Ok so if AI preferred black women no one would say a GOD DAMN thing....how f***e…" (`ytc_UgwfR7jS_…`)
- "I’m always worried about AI's impact on security, but Codoki's detailed analysis…" (`ytc_UgxlU2-2Z…`)
- "I trust her less then him.... for all we know they communicated in the cloud and…" (`ytc_UgxWra8vv…`)
- "Question for you. I've always been taught, even as a third year, to always read …" (`ytc_UgysTErsB…`)
- "We are not in danger of loosing everything, the miljonair and billionaire class …" (`ytc_UgxCfZ_2Y…`)
- "Thanks for this report. Very clear dangers with this AI "training techniques" an…" (`ytc_UgyQ_U6Ip…`)
- "I think level 2 self driving is more dangerous than anyother level. Its like eve…" (`ytc_UgzjT8Ye3…`)
Comment
The arguments are as old as the hills. There is nothing new here. It was amusing to see ChatGPT’s arguments were more nuanced than Peter’s and when ChatGPT asked some questions around how offence etc he really didn’t engage with ChatGPT. I’m sure Peter would argue that he was manipulating the argument and taking positions he normally wouldn’t take but he could not get ChatGPT to take a contrary/straw-man position on anything. If fact it was always taking a constructive approach. I would really like to see Peter take his own position and die on the hill with it.
I’d love to see how Jordan Peterson would go with a similar debate. My opinion is that he’d go ok for the 1st half and look like a total dick for the second half.
youtube
2024-06-04T02:3…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz2_tKSgCZmUcBytyl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzwiScqDPn83Gj8xEl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzVuZTVxNf9RtBkVyN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxNzIGPOasV7xN92zp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwzZFKkRQS850PJ44F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzu5v85dXs83GW4pch4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwYJjcMX9spuhjgZzZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxO4Ix92S9mWubHaM14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxhhsigS4ok0uRReZx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwY7wAb8HapdH_tvzd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
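The lookup-by-comment-ID step above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: it assumes the raw LLM response is a JSON array of records with an `id` field, as in the output shown, and the two sample records are copied from it.

```python
import json

# Raw LLM response: a JSON array of per-comment codes.
# These two records are copied from the sample output above.
raw_response = """
[
  {"id":"ytc_Ugz2_tKSgCZmUcBytyl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwY7wAb8HapdH_tvzd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse the model output and index each coding record by its comment ID."""
    records = json.loads(raw)
    return {record["id"]: record for record in records}

# Look up the exact model output for one coded comment.
codes = index_by_comment_id(raw_response)
print(codes["ytc_UgwY7wAb8HapdH_tvzd4AaABAg"]["responsibility"])  # → user
```

Indexing by `id` mirrors what the "Look up by comment ID" control does: given any comment ID, it returns the full coding record (responsibility, reasoning, policy, emotion) the model emitted for that comment.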