The benchmark comprises 161 programming problems. Our analysis yields a novel robustness metric called CLEVER, short for Cross Lipschitz Extreme Value for nEtwork Robustness. It requires full formal specifications and proofs. One common approach is training models to refuse unsafe queries, but this strategy is vulnerable to cleverly crafted prompts, often referred to as jailbreak attacks, which can trick the AI into providing harmful responses. Our method, STAIR (SafeTy Alignment with Introspective Reasoning), guides models to think more carefully before responding. We use a technique that rotates the data within each layer of the model, making it easier to identify and retain only the most important components for processing.
This ensures that the model remains fast and efficient without losing much accuracy.
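The rotate-then-truncate idea above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a single linear layer with hypothetical calibration activations `X` and weights `W`, rotates the activations into their principal-component basis via SVD, keeps the top-`k` directions, and folds the rotation into a smaller weight matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 64, 16, 1024            # hidden size, kept dims, calibration samples

X = rng.standard_normal((n, d))   # hypothetical calibration activations
W = rng.standard_normal((d, d))   # hypothetical layer weights

# SVD of the centered activations gives an orthogonal rotation Q whose
# columns are the principal directions of the data.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Q = Vt.T                          # d x d orthogonal rotation

# Keep only the top-k rotated directions and fold Q into the weights,
# shrinking both the activations and the layer.
Q_k = Q[:, :k]                    # d x k
X_small = Xc @ Q_k                # n x k compressed activations
W_small = Q_k.T @ W               # k x d compressed weights

# The compressed layer approximates the original on the kept subspace.
approx = X_small @ W_small
full = Xc @ W
rel_err = np.linalg.norm(full - approx) / np.linalg.norm(full)
print(X_small.shape, W_small.shape, round(rel_err, 3))
```

Because `Q` is orthogonal, the rotation itself is lossless; all of the approximation error comes from discarding the low-variance directions, which is exactly why the rotated basis makes "the most important parts" easy to identify.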