Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "😂 meet George Jetson. Jane his wife, his boy Elroy, ****y'all remember that TV h…" (ytc_UgzNOMuq8…)
- "I am dreaming of going to a school like this but my dreams always fall apart…" (ytc_UgyDAfYQc…)
- "Robots aren't always as smart as they should be. What the robot didn't think of …" (ytc_Ugxd3KYe4…)
- "Ai generated images, or C.R.A.P as another comment said, lack emotion. They lack…" (ytc_UgwX_NX_c…)
- "We said this about industrialization but GDP per capita quadrupled. Automation i…" (ytc_Ugzf5Q6r-…)
- "I think AI has just three legitimate uses: 1. Funny "photorealistic" images (ex…" (ytc_UgyCr4f9a…)
- "Why do people hate ai art soooo much it saves money and it's good if your on a l…" (ytc_UgyF6UWa7…)
- "Wah???? There is literally NO WAY!!! AN AI MODEL COULD DO THIS!!! WHERE IN THE W…" (ytc_UgxMwVAem…)
Comment
A robot attacking a factory worker due to anger is unlikely, as robots don't possess emotions like humans do. However, a robot can be programmed to respond aggressively or defensively in certain situations, which may be misinterpreted as "anger."
Reasons for a robot to behave aggressively include:
1. Self-defense mechanisms: A robot may be designed to protect itself from harm or damage.
2. Programming errors: A robot's programming can include flawed logic or algorithms leading to unexpected behavior.
3. Sensor malfunctions: Faulty sensors can cause a robot to misinterpret its environment and react inappropriately.
4. Simulation or testing: Robots may be programmed to simulate aggressive behavior for testing or training purposes.
To prevent aggressive robot behavior, it's essential to:
1. Ensure proper programming and testing.
2. Implement safety protocols and fail-safes.
3. Regularly inspect and maintain robots.
4. Provide clear guidelines for human-robot interaction.
5. Continuously monitor and assess potential risks.
By addressing these factors, we can minimize the likelihood of robots behaving aggressively and ensure a safer working environment.
youtube · AI Harm Incident · 2024-08-30T07:4… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgypOXu1zFCv81gVnkN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx75oi_wGEDueKikQp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyK66lnppROYBrhXBN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzS6pjd1nU0-7gKdrZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzm1hSCzjQcgd0blOh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgymT4Gz5KtaAkvH-vd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwppL9zSvJC3vW6WqR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgxbJiqYi0I8RD2wcPZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwR7hdnt8IUNbibHSd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzquP1fxzW_t2u1dXx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
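The coding result above is presumably produced by parsing a batched JSON response like this one into per-comment records. A minimal sketch of that step, assuming a codebook inferred from the values visible in these samples (the real codebook likely defines more categories, and `parse_codings` is a hypothetical helper, not part of this tool):

```python
import json

# Allowed values per dimension, inferred from the samples above (assumption).
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "government"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "fear", "mixed", "approval"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of coded comments) into a
    {comment_id: codes} lookup, rejecting unknown dimensions or values."""
    lookup = {}
    for record in json.loads(raw):
        cid = record.pop("id")
        for dim, value in record.items():
            if dim not in CODEBOOK or value not in CODEBOOK[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        lookup[cid] = record
    return lookup

raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"unclear","policy":"none","emotion":"mixed"}]')
codes = parse_codings(raw)
print(codes["ytc_example"]["emotion"])  # mixed
```

Validating against a fixed codebook at parse time is what makes a "look up by comment ID" view like this one safe to render: any record the model mis-formats fails loudly instead of appearing as a blank table cell.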