Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Holy shit I lived to see a robot two piece a human where are you god…" (ytc_Ugw8cy3LB…)
- "Sorry to tell you but Elon has his new robot coming out soon. September 30 is wh…" (ytr_UgwItUZTy…)
- "The model in our brain updates continuously. The AI model is static. Wouldn't yo…" (ytc_UgzLelvZb…)
- "Ai will get better but none of ais work could be as valuable as a real humans dr…" (ytr_UgxnL9Bs3…)
- "I use both Gpt 3.5 (Official web) and GPT 4 (on bing), GPT-4 is actually dumber …" (rdc_jskpi0q)
- "Dr Hinton has a very authoritarian political worldview ... casually mentioned we…" (ytc_UgwQ3YoOI…)
- "I'm not sure if you'll see this Hank, but I think some important questions to as…" (ytc_UgzCQ-iUB…)
- "The bigger companies grow the less productive they become. AI as a tool has noth…" (ytc_Ugws0TYsg…)
Comment
The concept missing here is that humans are already trying to use AI to detect intrusions on networks and vulnerabilities in applications. Aka, AI to detect AI's / or humans intrusion of these systems and applications. We're far more likely to see the use of AI as weapons or defense platforms against other AI weapons and defense platforms.
And if one AI sentient or not tries to exploit vulnerabilities, others will be trying to detect them.
youtube · AI Governance · 2023-07-07T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugy8IQGk-gYc18v63wh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwCNVV5Ye3QYy-0lMJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyYKwuRzqibjB-3qlR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzbFipqmcP0jz-mO0h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugz6wYlA8NXIjBrC9cR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwfNBG3U1oBk3Arq4x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz8uMkM2f5N7R2r7414AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxhCLfqTyqeuzU6fdd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw7fJajyaDSGWA0eGN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzLPYrIxCrf2GTKCVd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]
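A raw response like the one above can be turned into a per-comment lookup table. Below is a minimal sketch of such a parser; it assumes the model returns a JSON array of objects with an `id` field plus the four coding dimensions from the result table (`responsibility`, `reasoning`, `policy`, `emotion`), and the `parse_codes` helper name is illustrative, not part of the actual pipeline.

```python
import json

# Expected coding dimensions, taken from the coding-result table above.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def parse_codes(text):
    """Parse a raw LLM response (JSON array) into {comment_id: codes},
    checking that every record carries all four dimensions."""
    coded = {}
    for rec in json.loads(text):
        cid = rec["id"]
        codes = {k: v for k, v in rec.items() if k in DIMENSIONS}
        missing = DIMENSIONS - codes.keys()
        if missing:
            raise ValueError(f"{cid}: missing dimensions {sorted(missing)}")
        coded[cid] = codes
    return coded

# One record copied from the raw response above, used as a worked example.
raw = ('[{"id":"ytc_UgzLPYrIxCrf2GTKCVd4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"approval"}]')

coded = parse_codes(raw)
print(coded["ytc_UgzLPYrIxCrf2GTKCVd4AaABAg"]["policy"])  # regulate
```

In a real run the validation step matters more than the parsing: a model that drops a dimension or invents a key should fail loudly here rather than silently propagate an incomplete code into the result table.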