CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a computer-generated test that humans can pass but current computer systems cannot. CAPTCHAs provide a method for automatically distinguishing a human from a computer program and can therefore protect Web services from abuse by so-called bots. Most CAPTCHAs consist of distorted images, usually of text, that the user must describe. Unfortunately, visual CAPTCHAs limit access for the millions of visually impaired people using the Web. Audio-based CAPTCHAs were created to solve this accessibility issue; however, their security has never been formally tested. Some visual CAPTCHAs have been broken using machine learning techniques, and we propose using similar ideas to test the security of audio-based CAPTCHAs. An audio-based CAPTCHA generally consists of a set of words to be identified, layered on top of noise. Analyzing the security of a CAPTCHA requires attempting to break it, and breaking an audio-based CAPTCHA involves two steps: first removing the noise, then converting the remaining audio to text. This paper presents an algorithm for adaptive noise reduction in audio-based CAPTCHAs, which in turn helps determine the strength of a CAPTCHA. The results show accuracy of up to 80% on audio-based CAPTCHAs taken from popular Web sites, which is high enough to conclude that these CAPTCHAs can be broken once converted to text form. © 2011 IEEE.
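The abstract does not detail the paper's adaptive noise-reduction algorithm. As an illustration only, the noise-removal step described above (separating spoken words from layered noise) can be sketched with classic frame-wise spectral subtraction; the function name, frame length, and synthetic demo signal below are all hypothetical, not the authors' method:

```python
import numpy as np

def spectral_subtraction(signal, noise_sample, frame_len=256):
    """Denoise a signal by subtracting an estimated noise magnitude
    spectrum from each frame's magnitude spectrum, keeping the phase."""
    # Estimate the noise magnitude spectrum by averaging over full
    # frames of a noise-only sample.
    n_full = (len(noise_sample) // frame_len) * frame_len
    noise_frames = noise_sample[:n_full].reshape(-1, frame_len)
    noise_mag = np.abs(np.fft.rfft(noise_frames, axis=1)).mean(axis=0)

    out = np.zeros(len(signal), dtype=float)
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        spectrum = np.fft.rfft(frame)
        mag = np.abs(spectrum)
        phase = np.angle(spectrum)
        # Subtract the noise estimate; clamp negative magnitudes to zero.
        clean_mag = np.maximum(mag - noise_mag, 0.0)
        out[start:start + frame_len] = np.fft.irfft(
            clean_mag * np.exp(1j * phase), n=frame_len)
    return out

# Synthetic demo: a 440 Hz tone buried in broadband noise.
rng = np.random.default_rng(0)
t = np.arange(2048)
tone = np.sin(2 * np.pi * 440 * t / 8000)
noise = 0.5 * rng.standard_normal(2048)
noisy = tone + noise
denoised = spectral_subtraction(noisy, noise)
```

In a real attack the denoised audio would then be passed to a speech recognizer, which is the second step the abstract describes.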