Compacted libs and made the code Java 7 compliant

Former-commit-id: 5ef92c478a977c02158a9829ae78972654f93873
Ziver Koc 2015-12-07 17:00:47 +01:00
parent 9b9774e150
commit 5f28b017f7
797 changed files with 5 additions and 182054 deletions

lib/java-speech-api-master.jar (binary, executable file)
Binary file not shown.


@@ -1,6 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<classpath>
<classpathentry kind="src" path="src"/>
<classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
<classpathentry kind="output" path="bin"/>
</classpath>


@@ -1,2 +0,0 @@
/bin
.classpath


@@ -1,16 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<projectDescription>
<name>java-speech-api-git</name>
<comment></comment>
<projects>
</projects>
<buildSpec>
<buildCommand>
<name>org.eclipse.jdt.core.javabuilder</name>
<arguments> </arguments>
</buildCommand>
</buildSpec>
<natures>
<nature>org.eclipse.jdt.core.javanature</nature>
</natures>
</projectDescription>


@@ -1,37 +0,0 @@
# Java-Speech-API Changelog
## Changelog
Each changelog entry corresponds to a tagged and signed Git commit that marks the changes.
A tagged commit may or may not have a corresponding binary version available.
Format: Tag: `<Corresponding Tag>`
* Version 1.15
* Optimized synthesiser class. Massive speed improvements on long input strings!
* Added experimental Duplex API in preparation for version 1.2.
* Version 1.11 (Tag V1.100)
* Fixed major bug in Recognizer
* Version 1.10 (Tag v1.100)
* Added new Microphone Analyzer class.
* Added volume and frequency detection and a framework for Voice Activity Detection (VAD)
* Microphone API updated to make it more usable.
* API re-branded as J.A.R.V.I.S. (Just A Reliable Vocal Interpreter & Synthesiser)
* Version 1.06 (Tag v1.016)
* Added support for synthesiser for strings longer than 100 characters (Credits to @Skylion007)
* Added support for synthesiser for multiple languages, accents, and voices. (Credits to @Skylion007)
* Added support for auto-detection of language within synthesiser. (Credits to @Skylion007)
* Version 1.05 (Tag: v1.015)
* Improved language support for recognizer (Credits to @duncanj)
* Added support for multiple responses for recognizer (Credits to @duncanj)
* Added profanity filter toggle support for recognizer (Credits to @duncanj)
* Version 1.01 (Tag: v1.01)
* Fixed state functions for Microphones
* Fixed encoding single byte frames
* Support Multiple Languages
* Version 1.00 (Tag: v1.00)
* Initial Release


@@ -1,23 +0,0 @@
# J.A.R.V.I.S. Speech API (Java-Speech-API) Credits
## Credits
The following people and organizations have helped provide functionality for the API:
* JavaFlacEncoder Project
* Provided functionality to convert Wave files to FLAC format
* This allowed for the FLAC audio to be sent to Google to be "recognized"
* Created by Preston Lacey
* Homepage: http://sourceforge.net/projects/javaflacencoder/
* Google
* Provided functionality for two main API functions
* Recognizer
* Allows for speech audio to be recognized to text
* Synthesiser
* Allows for text to speech translation
* Homepage: http://google.com
* Princeton University
* The implemented FFT algorithm is derived from one on the university's website.
* Homepage: http://www.princeton.edu
We would like to thank all of the above for their work; this wrapper/API could not have been
created without it.


@@ -1,674 +0,0 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
{one line to give the program's name and a brief idea of what it does.}
Copyright (C) {year} {name of author}
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
{project} Copyright (C) {year} {fullname}
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.


@@ -1,30 +0,0 @@
# J.A.R.V.I.S. (Java-Speech-API)
J.A.R.V.I.S. Java Speech API: Just A Reliable Vocal Interpreter & Synthesizer.
This is a project for the Java Speech API. The program interprets vocal inputs into text and synthesizes voices from text input.
The program supports dozens of languages and even has the ability to auto-detect languages!
## Description
The J.A.R.V.I.S. Speech API is designed to be simple and efficient, using the speech engines created by Google
to provide functionality for parts of the API. Essentially, it is an API written in Java,
including a recognizer, synthesizer, and a microphone capture utility. The project uses
Google services for the synthesizer and recognizer. While this requires an Internet
connection, it provides a complete, modern, and fully functional speech API in Java.
## Features
The API currently provides the following functionality (a short usage sketch follows the list):
* Microphone Capture API (Wrapped around the current Java API for simplicity)
* A speech recognizer using Google's recognizer service
* Converts WAVE files from microphone input to FLAC (using existing API, see CREDITS)
* Retrieves Response from Google, including confidence score and text
* A speech synthesizer using Google's synthesizer service
* Retrieves synthesized text in an InputStream (MP3 data ready to be played)
* Wave to FLAC API (Wrapped around the used API in the project, javaFlacEncoder, see CREDITS)
* A translator using Google Translate (courtesy of Skylion's Google Toolkit)
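
For a rough sense of how the pieces fit together, here is a minimal capture-and-convert sketch. The class and method names are taken from the sources in this repository; the file names and the five-second recording window are illustrative only:

```java
import java.io.File;
import javax.sound.sampled.AudioFileFormat;
import com.darkprograms.speech.microphone.Microphone;
import com.darkprograms.speech.recognizer.FlacEncoder;

public class CaptureDemo {
    public static void main(String[] args) throws Exception {
        Microphone mic = new Microphone(AudioFileFormat.Type.WAVE);
        File wav = new File("capture.wav");   // illustrative file name
        mic.captureAudioToFile(wav);          // capture runs on a background thread
        Thread.sleep(5000);                   // record for roughly five seconds
        mic.close();                          // stop the line and finish writing the file
        new FlacEncoder().convertWaveToFlac(wav, new File("capture.flac"));
    }
}
```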
## Changelog
See CHANGELOG.markdown for Version History/Changelog
## Credits
See CREDITS.markdown for Credits


@@ -1,13 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<module type="JAVA_MODULE" version="4">
<component name="NewModuleRootManager" inherit-compiler-output="true">
<exclude-output />
<content url="file://$MODULE_DIR$">
<sourceFolder url="file://$MODULE_DIR$/src" isTestSource="false" />
</content>
<orderEntry type="inheritedJdk" />
<orderEntry type="sourceFolder" forTests="false" />
<orderEntry type="library" name="javaFlacEncoder-0.2" level="project" />
</component>
</module>


@@ -1,2 +0,0 @@
Manifest-Version: 1.0


@@ -1,224 +0,0 @@
package com.darkprograms.speech.microphone;
import javax.sound.sampled.*;
import java.io.Closeable;
import java.io.File;
/***************************************************************************
* Microphone class that contains methods to capture audio from the microphone
*
* @author Luke Kuza, Aaron Gokaslan
***************************************************************************/
public class Microphone implements Closeable{
/**
* TargetDataLine variable to receive data from the microphone
*/
private TargetDataLine targetDataLine;
/**
* Enum for current Microphone state
*/
public enum CaptureState {
PROCESSING_AUDIO, STARTING_CAPTURE, CLOSED
}
/**
* Variable holding the current CaptureState
*/
CaptureState state;
/**
* Variable for the audio's saved file type
*/
private AudioFileFormat.Type fileType;
/**
* Variable that holds the saved audio file
*/
private File audioFile;
/**
* Gets the current state of Microphone
*
* @return PROCESSING_AUDIO is returned when the Thread is recording Audio and/or saving it to a file<br>
* STARTING_CAPTURE is returned if the Thread is setting variables<br>
* CLOSED is returned if the Thread is not doing anything/not capturing audio
*/
public CaptureState getState() {
return state;
}
/**
* Sets the current state of Microphone
*
* @param state State from enum
*/
private void setState(CaptureState state) {
this.state = state;
}
public File getAudioFile() {
return audioFile;
}
public void setAudioFile(File audioFile) {
this.audioFile = audioFile;
}
public AudioFileFormat.Type getFileType() {
return fileType;
}
public void setFileType(AudioFileFormat.Type fileType) {
this.fileType = fileType;
}
public TargetDataLine getTargetDataLine() {
return targetDataLine;
}
public void setTargetDataLine(TargetDataLine targetDataLine) {
this.targetDataLine = targetDataLine;
}
/**
* Constructor
*
* @param fileType File type to save the audio in<br>
* For example, to save as WAVE use AudioFileFormat.Type.WAVE
*/
public Microphone(AudioFileFormat.Type fileType) {
setState(CaptureState.CLOSED);
setFileType(fileType);
initTargetDataLine();
}
/**
* Initializes the target data line.
*/
private void initTargetDataLine(){
DataLine.Info dataLineInfo = new DataLine.Info(TargetDataLine.class, getAudioFormat());
try {
setTargetDataLine((TargetDataLine) AudioSystem.getLine(dataLineInfo));
} catch (LineUnavailableException e) {
// The microphone line is unavailable; leave the targetDataLine unset
e.printStackTrace();
return;
}
}
/**
* Captures audio from the microphone and saves it to a file
*
* @param audioFile The File to save the audio to
* @throws LineUnavailableException if the microphone line cannot be opened
*/
public void captureAudioToFile(File audioFile) throws LineUnavailableException {
setState(CaptureState.STARTING_CAPTURE);
setAudioFile(audioFile);
if(getTargetDataLine() == null){
initTargetDataLine();
}
//Get Audio
new Thread(new CaptureThread()).start();
}
/**
* Captures audio from the microphone and saves it to a file
*
* @param audioFile The full path (String) of the file to save the audio in
* @throws LineUnavailableException if the microphone line cannot be opened
*/
public void captureAudioToFile(String audioFile) throws LineUnavailableException {
File file = new File(audioFile);
captureAudioToFile(file);
}
/**
* The audio format to save in
*
* @return Returns AudioFormat to be used later when capturing audio from microphone
*/
public AudioFormat getAudioFormat() {
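// 8 kHz, 16-bit, mono, signed, little-endian PCM; this matches the 8000 Hz / 16-bit /
// 1-channel StreamConfiguration that FlacEncoder uses when converting captures to FLAC.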
float sampleRate = 8000.0F;
//8000,11025,16000,22050,44100
int sampleSizeInBits = 16;
//8,16
int channels = 1;
//1,2
boolean signed = true;
//true,false
boolean bigEndian = false;
//true,false
return new AudioFormat(sampleRate, sampleSizeInBits, channels, signed, bigEndian);
}
/**
* Opens the microphone, starting the targetDataLine.
* If it's already open, it does nothing.
*/
public void open(){
if(getTargetDataLine()==null){
initTargetDataLine();
}
if(!getTargetDataLine().isOpen() && !getTargetDataLine().isRunning() && !getTargetDataLine().isActive()){
try {
setState(CaptureState.PROCESSING_AUDIO);
getTargetDataLine().open(getAudioFormat());
getTargetDataLine().start();
} catch (LineUnavailableException e) {
// The line could not be opened; leave the microphone closed
e.printStackTrace();
return;
}
}
}
/**
* Close the microphone capture, saving all processed audio to the specified file.<br>
* If already closed, this does nothing
*/
public void close() {
if (getState() != CaptureState.CLOSED) {
getTargetDataLine().stop();
getTargetDataLine().close();
setState(CaptureState.CLOSED);
}
}
/**
* Thread to capture the audio from the microphone and save it to a file
*/
private class CaptureThread implements Runnable {
/**
* Run method for thread
*/
public void run() {
try {
AudioFileFormat.Type fileType = getFileType();
File audioFile = getAudioFile();
open();
AudioSystem.write(new AudioInputStream(getTargetDataLine()), fileType, audioFile);
//Will write to File until it's closed.
} catch (Exception ex) {
ex.printStackTrace();
}
}
}
}


@@ -1,288 +0,0 @@
package com.darkprograms.speech.microphone;
import javax.sound.sampled.AudioFileFormat;
import com.darkprograms.speech.util.*;
/********************************************************************************************
* Microphone Analyzer class, detects pitch and volume while extending the microphone class.
* Implemented as a precursor to a Voice Activity Detection (VAD) algorithm.
* Currently can be used for audio data analysis.
* Dependencies: FFT.java & Complex.java. Both found in the utility package.
* @author Aaron Gokaslan
********************************************************************************************/
public class MicrophoneAnalyzer extends Microphone {
/**
* Constructor
* @param fileType The file type you want to save in. FLAC recommended.
*/
public MicrophoneAnalyzer(AudioFileFormat.Type fileType){
super(fileType);
}
/**
* Gets the volume of the microphone input
* The default interval is 100 ms; allow that long for this method to run, or specify a smaller interval.
* @return The volume of the microphone input or -1 if data-line is not available
*/
public int getAudioVolume(){
return getAudioVolume(100);
}
/**
* Gets the volume of the microphone input
* @param interval The length of time, in milliseconds, to calculate the volume over.
* @return The volume of the microphone input or -1 if data-line is not available.
*/
public int getAudioVolume(int interval){
return calculateAudioVolume(this.getNumOfBytes(interval/1000d));
}
/**
* Gets the volume of microphone input
* @param numOfBytes The number of bytes you want for volume interpretation
* @return The volume over the specified number of bytes or -1 if data-line is unavailable.
*/
private int calculateAudioVolume(int numOfBytes){
byte[] data = getBytes(numOfBytes);
if(data==null)
return -1;
return calculateRMSLevel(data);
}
/**
* Calculates the volume of AudioData which may be buffered data from a data-line.
* @param audioData The byte[] you want to determine the volume of
* @return the calculated volume of audioData
*/
public static int calculateRMSLevel(byte[] audioData){
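// Root-mean-square of each sample's deviation from the mean (i.e. the standard
// deviation of the byte values); the trailing +0.5 rounds to the nearest int.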
long lSum = 0;
for(int i=0; i<audioData.length; i++)
lSum = lSum + audioData[i];
double dAvg = (double) lSum / audioData.length; // cast avoids truncating integer division
double sumMeanSquare = 0d;
for(int j=0; j<audioData.length; j++)
sumMeanSquare = sumMeanSquare + Math.pow(audioData[j] - dAvg, 2d);
double averageMeanSquare = sumMeanSquare / audioData.length;
return (int)(Math.pow(averageMeanSquare,0.5d) + 0.5);
}
/**
* Returns the number of bytes recorded over the given interval; useful when figuring out how long to record.
* @param seconds The length in seconds
* @return the number of bytes the microphone will save.
*/
public int getNumOfBytes(int seconds){
return getNumOfBytes((double)seconds);
}
/**
* Returns the number of bytes recorded over the given interval; useful when figuring out how long to record.
* @param seconds The length in seconds
* @return the number of bytes the microphone will output over the specified time.
*/
public int getNumOfBytes(double seconds){
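// bytes = seconds * sample rate (frames/s) * frame size (bytes/frame); the +.5 rounds to the nearest int.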
return (int)(seconds*getAudioFormat().getSampleRate()*getAudioFormat().getFrameSize()+.5);
}
/**
* Returns a byte[] containing the specified number of bytes
* @param numOfBytes The length of the returned array.
* @return The filled array, or null if the target data line is unavailable.
*/
private byte[] getBytes(int numOfBytes){
if(getTargetDataLine()!=null){
byte[] data = new byte[numOfBytes];
this.getTargetDataLine().read(data, 0, numOfBytes);
return data;
}
return null;//If data cannot be read, returns a null array.
}
/**
* Calculates the fundamental frequency. In other words, it estimates pitch,
* though pitch is far more subjective and subtle. Note that readings may occasionally
* be in error due to the complex nature of sound. This feature is in beta.
* @return The frequency of the sound in Hertz.
*/
public int getFrequency(){
try {
return getFrequency(4096);
} catch (Exception e) {
//Unreachable: 4096 is a multiple of 2, so getFrequency(int) does not throw here.
return -666;
}
}
/**
* Calculates the frequency based on the given number of bytes.
* CAVEAT: the number of bytes must be a multiple of 2.
* @param numOfBytes The number of bytes to read; must be a multiple of 2.
* @return The calculated frequency in Hertz.
*/
public int getFrequency(int numOfBytes) throws Exception{
if(getTargetDataLine() == null){
return -1;
}
byte[] data = new byte[numOfBytes+1];//One byte is lost during conversion
this.getTargetDataLine().read(data, 0, numOfBytes);
return getFrequency(data);
}
/**
* Calculates the frequency based on the given byte array.
* @param bytes The audioData you want to analyze
* @return The calculated frequency in Hertz.
*/
public int getFrequency(byte[] bytes){
double[] audioData = this.bytesToDoubleArray(bytes);
audioData = applyHanningWindow(audioData);
Complex[] complex = new Complex[audioData.length];
for(int i = 0; i<complex.length; i++){
complex[i] = new Complex(audioData[i], 0);
}
Complex[] fftTransformed = FFT.fft(complex);
return this.calculateFundamentalFrequency(fftTransformed, 4);
}
/**
* Applies a Hanning Window to the data set.
* Hanning Windows are used to increase the accuracy of the FFT.
* One should always apply a window to a dataset before applying an FFT
* @param data The data you want to apply the window to
* @return The windowed data set
*/
private double[] applyHanningWindow(double[] data){
return applyHanningWindow(data, 0, data.length);
}
/**
* Applies a Hanning Window to the data set.
* Hanning Windows are used to increase the accuracy of the FFT.
* One should always apply a window to a dataset before applying an FFT
* @param signal_in The data you want to apply the window to
* @param pos The starting index you want to apply a window from
* @param size The size of the window
* @return The windowed data set
*/
private double[] applyHanningWindow(double[] signal_in, int pos, int size){
for (int i = pos; i < pos + size; i++){
int j = i - pos; // j = index into Hann window function
signal_in[i] = (double)(signal_in[i] * 0.5 * (1.0 - Math.cos(2.0 * Math.PI * j / size)));
}
return signal_in;
}
/**
* This method calculates the fundamental frequency using the Harmonic Product Spectrum.
* It downsamples the FFT data N times and multiplies the downsampled arrays
* together to determine the fundamental frequency. This is slightly more computationally
* expensive, but much more accurate. In simpler terms, the function removes the harmonic frequencies,
* which occur at multiples of the fundamental, by finding the greatest common divisor among them.
* @param fftData The array returned by the FFT
* @param N the number of times you wish to downsample.
* WARNING: The more times you downsample, the lower the maximum detectable frequency is.
* @return The fundamental frequency in Hertz
*/
private int calculateFundamentalFrequency(Complex[] fftData, int N){
if(N<=0 || fftData == null){ return -1; } //error case
final int LENGTH = fftData.length;//Used to calculate bin size
fftData = removeNegativeFrequencies(fftData);
Complex[][] data = new Complex[N][fftData.length/N];
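// Row i holds the spectrum downsampled by a factor of (i+1): data[i][j] = fftData[j*(i+1)].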
for(int i = 0; i<N; i++){
for(int j = 0; j<data[0].length; j++){
data[i][j] = fftData[j*(i+1)];
}
}
Complex[] result = new Complex[fftData.length/N];//Combines the arrays
for(int i = 0; i<result.length; i++){
Complex tmp = new Complex(1,0);
for(int j = 0; j<N; j++){
tmp = tmp.times(data[j][i]);
}
result[i] = tmp;
}
int index = this.findMaxMagnitude(result);
return index*getFFTBinSize(LENGTH);
}
/**
* Removes the negative-frequency half of the transform; for real-valued audio it mirrors
* the positive half and carries no extra information.
* @param c The transform you want to remove the negative frequencies from
* @return The cleaned data
*/
private Complex[] removeNegativeFrequencies(Complex[] c){
Complex[] out = new Complex[c.length/2];
for(int i = 0; i<out.length; i++){
out[i] = c[i];
}
return out;
}
/**
* Calculates the FFT bin size based on the length of the array.
* Each FFT bin represents a range of frequencies treated as one.
* For example, if the bin size is 5, then the algorithm is precise to within 5 Hz.
* Precondition: length cannot be 0.
* @param fftDataLength The length of the array used to feed the FFT algorithm
* @return FFTBin size
*/
private int getFFTBinSize(int fftDataLength){
return (int)(getAudioFormat().getSampleRate()/fftDataLength+.5);
}
/**
* Calculates index of the maximum magnitude in a complex array.
* @param input The Complex[] you want to get the max magnitude from.
* @return The index of the max magnitude
*/
private int findMaxMagnitude(Complex[] input){
//Calculates Maximum Magnitude of the array
double max = Double.NEGATIVE_INFINITY; // MIN_VALUE is the smallest positive double, not the most negative
int index = -1;
for(int i = 0; i<input.length; i++){
Complex c = input[i];
double tmp = c.getMagnitude();
if(tmp>max){
max = tmp;
index = i;
}
}
return index;
}
/**
* Converts bytes from a TargetDataLine into a double[] allowing the information to be read.
* NOTE: One byte is lost in the conversion so don't expect the arrays to be the same length!
* @param bufferData The buffer read in from the target data line
* @return The double[] that the buffer has been converted into.
*/
private double[] bytesToDoubleArray(byte[] bufferData){
final int bytesRecorded = bufferData.length;
final int bytesPerSample = getAudioFormat().getSampleSizeInBits()/8;
final double amplification = 100.0; // choose a number as you like
double[] micBufferData = new double[bytesRecorded - bytesPerSample +1];
for (int index = 0, floatIndex = 0; index < bytesRecorded - bytesPerSample + 1; index += bytesPerSample, floatIndex++) {
double sample = 0;
for (int b = 0; b < bytesPerSample; b++) {
int v = bufferData[index + b];
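// Mask the low-order bytes to unsigned; the most significant byte of a little-endian
// sample keeps its sign so the assembled value is sign-extended correctly
// (single-byte samples are treated as unsigned).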
if (b < bytesPerSample - 1 || bytesPerSample == 1) {
v &= 0xFF;
}
sample += v << (b * 8);
}
double sample32 = amplification * (sample / 32768.0);
micBufferData[floatIndex] = sample32;
}
return micBufferData;
}
}
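A minimal polling sketch for the analyzer above (the WAVE file type, loop count, and output format are illustrative, not prescribed by the class):
MicrophoneAnalyzer mic = new MicrophoneAnalyzer(AudioFileFormat.Type.WAVE);
mic.open(); // start the target data line before polling
for (int i = 0; i < 10; i++) {
int volume = mic.getAudioVolume(); // RMS level over the default 100 ms window
int frequency = mic.getFrequency(); // fundamental-frequency estimate in Hz
System.out.println("RMS " + volume + ", ~" + frequency + " Hz");
}
mic.close();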


@@ -1,120 +0,0 @@
package com.darkprograms.speech.recognizer;
import javaFlacEncoder.FLACEncoder;
import javaFlacEncoder.FLACFileOutputStream;
import javaFlacEncoder.StreamConfiguration;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.File;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
/*************************************************************************************************************
* Class that contains methods to encode Wave files to FLAC files
* THIS IS THANKS TO THE javaFlacEncoder Project created here: http://sourceforge.net/projects/javaflacencoder/
************************************************************************************************************/
public class FlacEncoder {
/**
* Constructor
*/
public FlacEncoder() {
}
/**
* Converts a wave file to a FLAC file (in order to POST the data to Google and retrieve a response) <br>
* Sample Rate is 8000 by default
*
* @param inputFile Input wave file
* @param outputFile Output FLAC file
*/
public void convertWaveToFlac(File inputFile, File outputFile) {
StreamConfiguration streamConfiguration = new StreamConfiguration();
streamConfiguration.setSampleRate(8000);
streamConfiguration.setBitsPerSample(16);
streamConfiguration.setChannelCount(1);
try {
AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(inputFile);
AudioFormat format = audioInputStream.getFormat();
int frameSize = format.getFrameSize();
FLACEncoder flacEncoder = new FLACEncoder();
FLACFileOutputStream flacOutputStream = new FLACFileOutputStream(outputFile);
flacEncoder.setStreamConfiguration(streamConfiguration);
flacEncoder.setOutputStream(flacOutputStream);
flacEncoder.openFLACStream();
int frameLength = (int) audioInputStream.getFrameLength();
if(frameLength <= AudioSystem.NOT_SPECIFIED){
frameLength = 16384;// Arbitrary buffer size for streams of unknown length
}
int[] sampleData = new int[frameLength];
byte[] samplesIn = new byte[frameSize];
int i = 0;
while (audioInputStream.read(samplesIn, 0, frameSize) != -1) {
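// Each frame holds one mono sample: two little-endian bytes for 16-bit audio,
// a single byte for 8-bit audio.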
if (frameSize != 1) {
ByteBuffer bb = ByteBuffer.wrap(samplesIn);
bb.order(ByteOrder.LITTLE_ENDIAN);
short shortVal = bb.getShort();
sampleData[i] = shortVal;
} else {
sampleData[i] = samplesIn[0];
}
i++;
}
sampleData = truncateNullData(sampleData, i);
flacEncoder.addSamples(sampleData, i);
flacEncoder.encodeSamples(i, false);
flacEncoder.encodeSamples(flacEncoder.samplesAvailableToEncode(), true);
audioInputStream.close();
flacOutputStream.close();
} catch (Exception ex) {
ex.printStackTrace();
}
}
/**
* Converts a wave file to a FLAC file (in order to POST the data to Google and retrieve a response) <br>
* Sample Rate is 8000 by default
*
* @param inputFile Input wave file
* @param outputFile Output FLAC file
*/
public void convertWaveToFlac(String inputFile, String outputFile) {
convertWaveToFlac(new File(inputFile), new File(outputFile));
}
/**
* Used for when the frame length is unknown to shorten the array to prevent huge blank end space
* @param sampleData The int[] array you want to shorten
* @param index The index you want to shorten it to
* @return The shortened array
*/
private int[] truncateNullData(int[] sampleData, int index){
if(index == sampleData.length) return sampleData;
int[] out = new int[index];
for(int i = 0; i<index; i++){
out[i] = sampleData[i];
}
return out;
}
}

View file

@ -1,524 +0,0 @@
package com.darkprograms.speech.recognizer;
import java.io.File;
import java.io.IOException;
import java.io.OutputStream;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLConnection;
import java.nio.file.Files;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;
import javaFlacEncoder.FLACFileWriter;
import javax.net.ssl.HttpsURLConnection;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;
import com.darkprograms.speech.util.ChunkedOutputStream;
import com.darkprograms.speech.util.StringUtil;
/**
* A class for using Google's Duplex Speech API. Allows for continuous recognition. Requires an API-Key.
* A duplex API opens two connections. One to an upstream and one to a downstream. The system allows
* for continuous chunking on both up and downstream. This, in turn, allows for Google to return data
* as data is sent to it. For this reason, this class uses listeners.
* @author Skylion (Aaron Gokaslan), Robert Rowntree.
*/
public class GSpeechDuplex{
//TODO Cleanup Printlns
/**
* Minimum value for ID
*/
private static final long MIN = 10000000;
/**
* Maximum value for ID
*/
private static final long MAX = 900000009999999L;
/**
* The base URL for the API
*/
private static final String GOOGLE_DUPLEX_SPEECH_BASE = "https://www.google.com/speech-api/full-duplex/v1/";
/**
* Stores listeners
*/
private List<GSpeechResponseListener> responseListeners = new ArrayList<GSpeechResponseListener>();
/**
* User defined API-KEY
*/
private final String API_KEY;
/**
* User-defined language
*/
private String language = "auto";
/**
* The maximum size the API will tolerate
*/
private final static int MAX_SIZE = 1048576;
/**
* Per the HTTP specification, the final (terminating) chunk in a ChunkedOutputStream
*/
private final static byte[] FINAL_CHUNK = new byte[] { '0', '\r', '\n', '\r', '\n' };
/**
* Constructor
* @param API_KEY The API-Key for Google's Speech API. An API key can be obtained by requesting
* one by following the process shown at this
* <a href="http://www.chromium.org/developers/how-tos/api-keys">url</a>.
*/
public GSpeechDuplex(String API_KEY){
this.API_KEY = API_KEY;
}
/**
* Temporary; will be deprecated before release.
*/
public String getLanguage(){
return language;
}
/**
* Temporary; will be deprecated before release.
*/
public void setLanguage(String language){
this.language = language;
}
/**
* Send a FLAC file with the specified sampleRate to the Duplex API
* @param flacFile The file you wish to upload.
* NOTE: Segment the file if duration is greater than 15 seconds.
* @param sampleRate The sample rate of the file.
* @throws IOException If something has gone wrong with reading the file
*/
public void recognize(File flacFile, int sampleRate) throws IOException{
recognize(mapFileIn(flacFile), sampleRate);
}
/**
* Send a byte[] to the URL with a specified sampleRate.
* NOTE: The byte[] should contain no more than 15 seconds of audio.
* Chunking is not yet fully implemented, so separate chunks are not strung together for context yet.
* @param data The byte[] you want to send.
* @param sampleRate The sample rate of aforementioned byte array.
*/
public void recognize(byte[] data, int sampleRate){
if(data.length >= MAX_SIZE){//Temporary chunking. Does not allow Google to gather context.
System.out.println("Chunking the audio into smaller parts...");
byte[][] dataArray = chunkAudio(data);
for(byte[] array: dataArray){
recognize(array, sampleRate);
}
return;//Without this return, the oversized data would also be sent whole below.
}
//Generates a unique ID for the response.
final long PAIR = MIN + (long)(Math.random() * ((MAX - MIN) + 1L));
//Generates the Downstream URL
final String API_DOWN_URL = GOOGLE_DUPLEX_SPEECH_BASE + "down?maxresults=1&pair=" + PAIR;
//Generates the Upstream URL
final String API_UP_URL = GOOGLE_DUPLEX_SPEECH_BASE +
"up?lang=" + language + "&lm=dictation&client=chromium&pair=" + PAIR +
"&key=" + API_KEY ;
//Opens downChannel
this.downChannel(API_DOWN_URL);
//Opens upChannel
this.upChannel(API_UP_URL, chunkAudio(data), sampleRate);
}
/**
* This method allows you to stream a continuous stream of data to the API.
* <p>Note: This feature is experimental.</p>
* @param tl
* @param af
* @throws IOException
* @throws LineUnavailableException
*/
public void recognize(TargetDataLine tl, AudioFormat af) throws IOException, LineUnavailableException{
//Generates a unique ID for the response.
final long PAIR = MIN + (long)(Math.random() * ((MAX - MIN) + 1L));
//Generates the Downstream URL
final String API_DOWN_URL = GOOGLE_DUPLEX_SPEECH_BASE + "down?maxresults=1&pair=" + PAIR;
//Generates the Upstream URL
final String API_UP_URL = GOOGLE_DUPLEX_SPEECH_BASE +
"up?lang=" + language + "&lm=dictation&client=chromium&pair=" + PAIR +
"&key=" + API_KEY + "&continuous"; //Tells Google to constantly monitor the stream;
//TODO Add implementation that sends feedback in real time. Protocol buffers will be necessary.
//Opens downChannel
this.downChannel(API_DOWN_URL);
//Opens upChannel
this.upChannel(API_UP_URL, tl, af);
}
/**
* This code opens a new Thread that connects to the downstream URL. Due to threading,
* the best way to handle this is through the use of listeners.
* @param urlStr The URL you want to connect to.
*/
private void downChannel(String urlStr) {
final String url = urlStr;
new Thread ("Downstream Thread") {
public void run() {
// handler for DOWN channel http response stream - httpsUrlConn
// response handler should manage the connection.... ??
// assign a TIMEOUT Value that exceeds by a safe factor
// the amount of time that it will take to write the bytes
// to the UPChannel in a fashion that mimics a liveStream
// of the audio at the applicable Bitrate. BR=sampleRate * bits per sample
// Note that the TLS session uses "* SSLv3, TLS alert, Client hello (1): "
// to wake up the listener when there are additional bytes.
// The mechanics of the TLS session should be transparent. Just use
// httpsUrlConn and allow it enough time to do its work.
Scanner inStream = openHttpsConnection(url);
if(inStream == null){
System.out.println("An error occurred while opening the downstream connection.");
return;//Nothing can be read without a stream.
}
while(inStream.hasNextLine()){
String response = inStream.nextLine();
System.out.println("Response: "+response);
if(response.length()>17){//Prevents blank responses from Firing
GoogleResponse gr = new GoogleResponse();
parseResponse(response, gr);
fireResponseEvent(gr);
}
}
inStream.close();
System.out.println("Finished write on down stream...");
}
}.start();
}
/**
* Used to initiate the URL chunking for the upChannel.
* @param urlStr The URL string you want to upload to
* @param data The data you want to send to the URL
* @param sampleRate The specified sample rate of the data.
*/
private void upChannel(String urlStr, byte[][] data, int sampleRate) {
final String murl = urlStr;
final byte[][] mdata = data;
final int mSampleRate = sampleRate;
new Thread ("Upstream File Thread") {
public void run() {
openHttpsPostConnection(murl, mdata, mSampleRate);
//Google does not return data via this URL
}
}.start();
}
/**
* Streams data from the TargetDataLine to the API.
* @param urlStr The URL to stream to
* @param tl The target data line to stream from.
* @param af The AudioFormat to stream with.
* @throws LineUnavailableException If cannot open or stream the TargetDataLine.
*/
private void upChannel(String urlStr, TargetDataLine tl, AudioFormat af) throws LineUnavailableException{
final String murl = urlStr;
final TargetDataLine mtl = tl;
final AudioFormat maf = af;
if(!mtl.isOpen()){
mtl.open(maf);
mtl.start();
}
new Thread ("Upstream Thread") {
public void run() {
openHttpsPostConnection(murl, mtl, maf);
}
}.start();
}
/**
* Opens a HTTPS connection to the specified URL string
* @param urlStr The URL you want to visit
* @return The Scanner to access aforementioned data.
*/
private Scanner openHttpsConnection(String urlStr) {
int resCode = -1;
try {
URL url = new URL(urlStr);
URLConnection urlConn = url.openConnection();
if (!(urlConn instanceof HttpsURLConnection)) {
throw new IOException ("URL is not an Https URL");
}
HttpsURLConnection httpConn = (HttpsURLConnection)urlConn;
httpConn.setAllowUserInteraction(false);
// TIMEOUT is required
httpConn.setInstanceFollowRedirects(true);
httpConn.setRequestMethod("GET");
httpConn.connect();
resCode = httpConn.getResponseCode();
if (resCode == HttpsURLConnection.HTTP_OK) {
return new Scanner(httpConn.getInputStream());
}
else{
System.out.println("Error: " + resCode);
}
} catch (MalformedURLException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
return null;
}
/**
* Opens a HTTPSPostConnection that posts data from a TargetDataLine input
* @param murl The URL you want to post to.
* @param mtl The TargetDataLine you want to post data from. <b>Note should be open</b>
* @param maf The AudioFormat of the data you want to post
*/
private void openHttpsPostConnection(final String murl,
final TargetDataLine mtl, final AudioFormat maf) {
URL url;
try {
url = new URL(murl);
URLConnection urlConn = url.openConnection();
if (!(urlConn instanceof HttpsURLConnection)) {
throw new IOException ("URL is not an Https URL");
}
HttpsURLConnection httpConn = (HttpsURLConnection)urlConn;
httpConn.setAllowUserInteraction(false);
httpConn.setInstanceFollowRedirects(true);
httpConn.setRequestMethod("POST");
httpConn.setDoOutput(true);
httpConn.setChunkedStreamingMode(0);
httpConn.setRequestProperty("Transfer-Encoding", "chunked");
httpConn.setRequestProperty("Content-Type", "audio/x-flac; rate=" + (int)maf.getSampleRate());
// also worked with ("Content-Type", "audio/amr; rate=8000");
httpConn.connect();
// this opens a connection, then sends POST & headers.
OutputStream out = httpConn.getOutputStream();
//Note: if the audio is more than 15 seconds,
// don't write it to the UrlConn stream all in one block as this sample does.
// Rather, segment the byteArray and, from an intermittently sleeping thread,
// supply bytes to the urlConn stream at a rate that approaches
// the bitrate ( =30K per sec. in this instance ).
System.out.println("Starting to write data to output...");
AudioInputStream ais = new AudioInputStream(mtl);
ChunkedOutputStream os = new ChunkedOutputStream(out);
AudioSystem.write(ais, FLACFileWriter.FLAC, os);
out.write(FINAL_CHUNK);
System.out.println("IO WRITE DONE");
out.close();
// do you need the trailer?
// NOW you can look at the status.
int resCode = httpConn.getResponseCode();
if (resCode / 100 != 2) {
System.out.println("ERROR");
}
}catch(Exception ex){
ex.printStackTrace();
}
}
/**
* Opens a chunked HTTPS POST connection and returns a Scanner with incoming data from the Google server.
* Used for the upstream connection.
* Chunked transfer encoding removes any fixed limit on file size.
* @param urlStr The String for the URL
* @param data The data you want to send the server
* @param sampleRate The sample rate of the flac file.
* @return A Scanner to access the server response. (Probably will never be used)
*/
private Scanner openHttpsPostConnection(String urlStr, byte[][] data, int sampleRate){
byte[][] mextrad = data;
int resCode = -1;
OutputStream out = null;
// int http_status;
try {
URL url = new URL(urlStr);
URLConnection urlConn = url.openConnection();
if (!(urlConn instanceof HttpsURLConnection)) {
throw new IOException ("URL is not an Https URL");
}
HttpsURLConnection httpConn = (HttpsURLConnection)urlConn;
httpConn.setAllowUserInteraction(false);
httpConn.setInstanceFollowRedirects(true);
httpConn.setRequestMethod("POST");
httpConn.setDoOutput(true);
httpConn.setChunkedStreamingMode(0);
httpConn.setRequestProperty("Transfer-Encoding", "chunked");
httpConn.setRequestProperty("Content-Type", "audio/x-flac; rate=" + sampleRate);
// also worked with ("Content-Type", "audio/amr; rate=8000");
httpConn.connect();
try {
// this opens a connection, then sends POST & headers.
out = httpConn.getOutputStream();
//Note: if the audio is more than 15 seconds,
// don't write it to the UrlConn stream all in one block as this sample does.
// Rather, segment the byteArray and, from an intermittently sleeping thread,
// supply bytes to the urlConn stream at a rate that approaches
// the bitrate ( =30K per sec. in this instance ).
System.out.println("Starting to write");
for(byte[] dataArray: mextrad){
out.write(dataArray); // one big block supplied instantly to the underlying chunker wont work for duration > 15 s.
try {
Thread.sleep(1000);//Delays the Audio so Google thinks its a mic.
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
out.write(FINAL_CHUNK);
System.out.println("IO WRITE DONE");
// do you need the trailer?
// NOW you can look at the status.
resCode = httpConn.getResponseCode();
if (resCode / 100 != 2) {
System.out.println("ERROR");
}
} catch (IOException e) {
e.printStackTrace();//Do not swallow write errors silently.
}
if (resCode == HttpsURLConnection.HTTP_OK) {
return new Scanner(httpConn.getInputStream());
}
else{
System.out.println("HELP: " + resCode);
}
} catch (MalformedURLException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
return null;
}
/**
* Converts the file into a byte[]. Also Android compatible. :)
* @param infile The File you want to get the byte[] from.
* @return The byte[]
* @throws IOException if something goes wrong in reading the file.
*/
private byte[] mapFileIn(File infile) throws IOException{
return Files.readAllBytes(infile.toPath());
}
/**
* Parses the String into a GoogleResponse object
* @param rawResponse The String you want to parse
* @param gr the GoogleResponse object to save the data into.
*/
private void parseResponse(String rawResponse, GoogleResponse gr){
if(rawResponse == null || !rawResponse.contains("\"result\"")
|| rawResponse.equals("{\"result\":[]}")){ return; }
if(rawResponse.contains("\"confidence\":")){
String confidence = StringUtil.substringBetween(rawResponse, "\"confidence\":", "}");
gr.setConfidence(confidence);
}
else{
gr.setConfidence(String.valueOf(1d));
}
String array = StringUtil.trimString(rawResponse, "[", "]");
if(array.contains("[")){
array = StringUtil.trimString(array, "[", "]");
}
if(array.contains("\"confidence\":")){//Removes confidence phrase if it exists.
array = array.substring(0, array.lastIndexOf(','));
}
String[] parts = array.split(",");
gr.setResponse(parseTranscript(parts[0]));
for(int i = 1; i<parts.length; i++){
gr.getOtherPossibleResponses().add(parseTranscript(parts[i]));
}
}
/**
* Parses each individual "transcript" phrase
* @param s The string fragment to parse
* @return The parsed String
*/
private String parseTranscript(String s){
String tmp = s.substring(s.indexOf(":")+1);
if(s.endsWith("}")){
tmp = tmp.substring(0, tmp.length()-1);
}
tmp = StringUtil.stripQuotes(tmp);
if(tmp.charAt(0)==' '){//Removes space at beginning if it exists
tmp = tmp.substring(1);
}
return tmp;
}
/**
* Adds GSpeechResponse Listeners that fire when Google sends a response.
* @param rl The listener you want to add
*/
public synchronized void addResponseListener(GSpeechResponseListener rl){
responseListeners.add(rl);
}
/**
* Removes GSpeechResponseListeners that fire when Google sends a response.
* @param rl The listener you want to remove
*/
public synchronized void removeResponseListener(GSpeechResponseListener rl){
responseListeners.remove(rl);
}
/**
* Fires responseListeners
* @param gr The Google Response (in this case the response event).
*/
private synchronized void fireResponseEvent(GoogleResponse gr){
for(GSpeechResponseListener gl: responseListeners){
gl.onResponse(gr);
}
}
/**
* Chunks audio into smaller chunks to stream to the duplex API
* @param data The data you want to break into smaller pieces
* @return the byte[][] containing an array of chunks.
*/
private byte[][] chunkAudio(byte[] data) {
if(data.length >= MAX_SIZE){//If larger than 1MB
int frame = MAX_SIZE/2;
int numOfChunks = (data.length + frame - 1)/frame;//Ceiling division avoids allocating a null trailing chunk when the length divides evenly
byte[][] data2D = new byte[numOfChunks][];
for(int i = 0, j = 0; i<data.length && j<data2D.length; i+=frame, j++){
int length = (data.length - i < frame)? data.length - i: frame;
System.out.println("LENGTH: " + length);
data2D[j] = new byte[length];
System.arraycopy(data, i, data2D[j], 0, length);
}
return data2D;
}
else{
byte[][] tmpData = new byte[1][data.length];
System.arraycopy(data, 0, tmpData[0], 0, data.length);
return tmpData;
}
}
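/* Illustration with assumed sizes: a 1.5 MB input (1,572,864 bytes) exceeds MAX_SIZE (1,048,576),
so frame = 524288 and the data is copied into exactly three chunks of 524288 bytes each. */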
}
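/* Usage sketch (hypothetical API key and file path; the 8000 Hz sample rate is an assumption
about the FLAC file, and recognize can throw IOException):
GSpeechDuplex duplex = new GSpeechDuplex("YOUR_API_KEY");
duplex.setLanguage("en");
duplex.addResponseListener(new GSpeechResponseListener() {
public void onResponse(GoogleResponse gr) {
System.out.println("Google heard: " + gr.getResponse());
}
});
duplex.recognize(new java.io.File("speech.flac"), 8000);
*/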

View file

@ -1,12 +0,0 @@
package com.darkprograms.speech.recognizer;
/**
* Response listeners for URL connections.
* @author Skylion
*
*/
public interface GSpeechResponseListener {
public void onResponse(GoogleResponse gr);
}

View file

@ -1,89 +0,0 @@
package com.darkprograms.speech.recognizer;
import java.util.ArrayList;
import java.util.List;
/******************************************************************************
* Class that holds the response and confidence of a Google recognizer request
*
* @author Luke Kuza, Duncan Jauncey, Aaron Gokaslan
******************************************************************************/
public class GoogleResponse {
/**
* Variable that holds the response
*/
private String response;
/**
* Variable that holds the confidence score
*/
private String confidence;
/**
* List that holds other possible responses for this request.
*/
private List<String> otherPossibleResponses = new ArrayList<String>(20);
/**
* Constructor
*/
public GoogleResponse() {
}
/**
* Gets the response text of what was said in the submitted Audio to Google
*
* @return String representation of what was said
*/
public String getResponse() {
return response;
}
/**
* Set the response
*
* @param response The response
*/
protected void setResponse(String response) {
this.response = response;
}
/**
* Gets the confidence score for the specific request
*
* @return The confidence score, ex .922343324323
*/
public String getConfidence() {
return confidence;
}
/**
* Set the confidence score for this request
*
* @param confidence The confidence score
*/
protected void setConfidence(String confidence) {
this.confidence = confidence;
}
/**
* Get other possible responses for this request.
* @return other possible responses
*/
public List<String> getOtherPossibleResponses() {
return otherPossibleResponses;
}
/**
* Gets all returned responses for this request
* @return All returned responses
*/
public List<String> getAllPossibleResponses() {
List<String> tmp = new ArrayList<String>(otherPossibleResponses);//Copy so repeated calls do not mutate the stored list
tmp.add(0, response);
return tmp;
}
}

View file

@ -1,466 +0,0 @@
package com.darkprograms.speech.recognizer;
import java.io.*;
import java.net.URL;
import java.net.URLConnection;
import java.nio.charset.Charset;
import com.darkprograms.speech.util.StringUtil;
/***************************************************************
* Class that submits FLAC audio and retrieves recognized text
*
* @author Luke Kuza, Duncan Jauncey, Aaron Gokaslan
**************************************************************/
@Deprecated
public class Recognizer {
@Deprecated
public enum Languages{
AUTO_DETECT("auto"),//tells Google to auto-detect the language
ARABIC_JORDAN("ar-JO"),
ARABIC_LEBANON("ar-LB"),
ARABIC_QATAR("ar-QA"),
ARABIC_UAE("ar-AE"),
ARABIC_MOROCCO("ar-MA"),
ARABIC_IRAQ("ar-IQ"),
ARABIC_ALGERIA("ar-DZ"),
ARABIC_BAHRAIN("ar-BH"),
ARABIC_LYBIA("ar-LY"),
ARABIC_OMAN("ar-OM"),
ARABIC_SAUDI_ARABIA("ar-SA"),
ARABIC_TUNISIA("ar-TN"),
ARABIC_YEMEN("ar-YE"),
BASQUE("eu"),
CATALAN("ca"),
CZECH("cs"),
DUTCH("nl-NL"),
ENGLISH_AUSTRALIA("en-AU"),
ENGLISH_CANADA("en-CA"),
ENGLISH_INDIA("en-IN"),
ENGLISH_NEW_ZEALAND("en-NZ"),
ENGLISH_SOUTH_AFRICA("en-ZA"),
ENGLISH_UK("en-GB"),
ENGLISH_US("en-US"),
FINNISH("fi"),
FRENCH("fr-FR"),
GALICIAN("gl"),
GERMAN("de-DE"),
HEBREW("he"),
HUNGARIAN("hu"),
ICELANDIC("is"),
ITALIAN("it-IT"),
INDONESIAN("id"),
JAPANESE("ja"),
KOREAN("ko"),
LATIN("la"),
CHINESE_SIMPLIFIED("zh-CN"),
CHINESE_TRANDITIONAL("zh-TW"),
CHINESE_HONGKONG("zh-HK"),
CHINESE_CANTONESE("zh-yue"),
MALAYSIAN("ms-MY"),
NORWEGIAN("no-NO"),
POLISH("pl"),
PIG_LATIN("xx-piglatin"),
PORTUGUESE("pt-PT"),
PORTUGUESE_BRASIL("pt-BR"),
ROMANIAN("ro-RO"),
RUSSIAN("ru"),
SERBIAN("sr-SP"),
SLOVAK("sk"),
SPANISH_ARGENTINA("es-AR"),
SPANISH_BOLIVIA("es-BO"),
SPANISH_CHILE("es-CL"),
SPANISH_COLOMBIA("es-CO"),
SPANISH_COSTA_RICA("es-CR"),
SPANISH_DOMINICAN_REPUBLIC("es-DO"),
SPANISH_ECUADOR("es-EC"),
SPANISH_EL_SALVADOR("es-SV"),
SPANISH_GUATEMALA("es-GT"),
SPANISH_HONDURAS("es-HN"),
SPANISH_MEXICO("es-MX"),
SPANISH_NICARAGUA("es-NI"),
SPANISH_PANAMA("es-PA"),
SPANISH_PARAGUAY("es-PY"),
SPANISH_PERU("es-PE"),
SPANISH_PUERTO_RICO("es-PR"),
SPANISH_SPAIN("es-ES"),
SPANISH_US("es-US"),
SPANISH_URUGUAY("es-UY"),
SPANISH_VENEZUELA("es-VE"),
SWEDISH("sv-SE"),
TURKISH("tr"),
ZULU("zu");
//TODO Clean Up JavaDoc for Overloaded Methods using @link
/**
*Stores the LanguageCode
*/
private final String languageCode;
/**
*Constructor
*/
private Languages(final String languageCode){
this.languageCode = languageCode;
}
public String toString(){
return languageCode;
}
}
/**
* URL to POST audio data and retrieve results
*/
private static final String GOOGLE_RECOGNIZER_URL = "https://www.google.com/speech-api/v1/recognize?xjerr=1&client=chromium";
private boolean profanityFilter = true;
private String language = null;
/**
* Constructor
*/
public Recognizer() {
this.setLanguage(Languages.AUTO_DETECT);
}
/**
* Constructor
* @param language The language code as a String
*/
@Deprecated
public Recognizer(String language) {
this.language = language;
}
/**
* Constructor
* @param language The Languages class for the language you want to designate
*/
public Recognizer(Languages language){
this.language = language.languageCode;
}
/**
* Constructor
* @param profanityFilter Whether Google's profanity filter should be enabled (on by default)
*/
public Recognizer(boolean profanityFilter){
this.profanityFilter = profanityFilter;
}
/**
* Constructor
* @param language The language code as a String
* @param profanityFilter Whether Google's profanity filter should be enabled
*/
@Deprecated
public Recognizer(String language, boolean profanityFilter){
this.language = language;
this.profanityFilter = profanityFilter;
}
/**
* Constructor
* @param language The Languages enum value for the language you want to designate
* @param profanityFilter Whether Google's profanity filter should be enabled
*/
public Recognizer(Languages language, boolean profanityFilter){
this.language = language.languageCode;
this.profanityFilter = profanityFilter;
}
/**
* Language: Contains all supported languages for Google Speech to Text.
* Setting this to null will make Google use its own language detection.
* This value is null by default.
* @param language
*/
public void setLanguage(Languages language) {
this.language = language.languageCode;
}
/**Language code. This language code must match the language of the speech to be recognized. ex. en-US ru-RU
* This value is null by default.
* @param language The language code.
*/
@Deprecated
public void setLanguage(String language) {
this.language = language;
}
/**
* Returns the state of profanityFilter
* which enables/disables Google's profanity filter (on by default).
* @return profanityFilter
*/
public boolean getProfanityFilter(){
return profanityFilter;
}
/**
* Language code. This language code must match the language of the speech to be recognized. ex. en-US ru-RU
* This value is null by default.
* @return language the Google language
*/
public String getLanguage(){
return language;
}
/**
* Get recognized data from a Wave file. This method will encode the wave file to a FLAC file
*
* @param waveFile Wave file to recognize
* @param maxResults Maximum number of results to return in response
* @return Returns a GoogleResponse, with the response and confidence score
* @throws IOException Throws exception if something goes wrong
*/
public GoogleResponse getRecognizedDataForWave(File waveFile, int maxResults) throws IOException{
FlacEncoder flacEncoder = new FlacEncoder();
File flacFile = new File(waveFile + ".flac");
flacEncoder.convertWaveToFlac(waveFile, flacFile);
GoogleResponse googleResponse = getRecognizedDataForFlac(flacFile, maxResults, 8000);
//Delete converted FLAC data
flacFile.delete();
return googleResponse;
}
/**
* Get recognized data from a Wave file. This method will encode the wave file to a FLAC
*
* @param waveFile Wave file to recognize
* @param maxResults the maximum number of results to return in the response
* NOTE: Sample rate of file must be 8000 unless a custom sample rate is specified.
* @return Returns a GoogleResponse, with the response and confidence score
* @throws IOException Throws exception if something goes wrong
*/
public GoogleResponse getRecognizedDataForWave(String waveFile, int maxResults) throws IOException {
return getRecognizedDataForWave(new File(waveFile), maxResults);
}
/**
* Get recognized data from a FLAC file.
*
* @param flacFile FLAC file to recognize
* @param maxResults the maximum number of results to return in the response
* NOTE: Sample rate of file must be 8000 unless a custom sample rate is specified.
* @return Returns a GoogleResponse, with the response and confidence score
* @throws IOException Throws exception if something goes wrong
*/
public GoogleResponse getRecognizedDataForFlac(File flacFile, int maxResults) throws IOException {
return getRecognizedDataForFlac(flacFile, maxResults, 8000);
}
/**
* Get recognized data from a FLAC file.
*
* @param flacFile FLAC file to recognize
* @param maxResults the maximum number of results to return in the response
* @param sampleRate The sample rate of the file. Default is 8000.
* @return Returns a GoogleResponse, with the response and confidence score
* @throws IOException Throws exception if something goes wrong
*/
public GoogleResponse getRecognizedDataForFlac(File flacFile, int maxResults, int sampleRate) throws IOException{
String response = rawRequest(flacFile, maxResults, sampleRate);
GoogleResponse googleResponse = new GoogleResponse();
parseResponse(response, googleResponse);
return googleResponse;
}
/**
* Get recognized data from a FLAC file.
*
* @param flacFile FLAC file to recognize
* @param maxResults the maximum number of results to return in the response
* @param sampleRate The sample rate of the file. Default is 8000.
* @return Returns a GoogleResponse, with the response and confidence score
* @throws IOException Throws exception if something goes wrong
*/
public GoogleResponse getRecognizedDataForFlac(String flacFile, int maxResults, int sampleRate) throws IOException{
return getRecognizedDataForFlac(new File(flacFile), maxResults, sampleRate);
}
/**
* Get recognized data from a FLAC file.
*
* @param flacFile FLAC file to recognize
* @param maxResults the maximum number of results to return in the response
* @return Returns a GoogleResponse, with the response and confidence score
* @throws IOException Throws exception if something goes wrong
*/
public GoogleResponse getRecognizedDataForFlac(String flacFile, int maxResults) throws IOException {
return getRecognizedDataForFlac(new File(flacFile), maxResults);
}
/**
* Get recognized data from a Wave file. This method will encode the wave file to a FLAC.
* This method requests a single result (maxResults = 1).
*
* @param waveFile Wave file to recognize
* @return Returns a GoogleResponse, with the response and confidence score
* @throws IOException Throws exception if something goes wrong
*/
public GoogleResponse getRecognizedDataForWave(File waveFile) throws IOException {
return getRecognizedDataForWave(waveFile, 1);
}
/**
* Get recognized data from a Wave file. This method will encode the wave file to a FLAC.
* This method requests a single result (maxResults = 1).
*
* @param waveFile Wave file to recognize
* @return Returns a GoogleResponse, with the response and confidence score
* @throws IOException Throws exception if something goes wrong
*/
public GoogleResponse getRecognizedDataForWave(String waveFile) throws IOException {
return getRecognizedDataForWave(waveFile, 1);
}
/**
* Get recognized data from a FLAC file.
* This method requests a single result (maxResults = 1).
*
* @param flacFile FLAC file to recognize
* @return Returns a GoogleResponse, with the response and confidence score
* @throws IOException Throws exception if something goes wrong
*/
public GoogleResponse getRecognizedDataForFlac(File flacFile) throws IOException {
return getRecognizedDataForFlac(flacFile, 1);
}
/**
* Get recognized data from a FLAC file.
* This method requests a single result (maxResults = 1).
*
* @param flacFile FLAC file to recognize
* @return Returns a GoogleResponse, with the response and confidence score
* @throws IOException Throws exception if something goes wrong
*/
public GoogleResponse getRecognizedDataForFlac(String flacFile) throws IOException {
return getRecognizedDataForFlac(flacFile, 1);
}
/**
* Parses the raw response from Google
*
* @param rawResponse The raw, unparsed response from Google
* @param googleResponse The GoogleResponse object that the parsed data is written into
*/
private void parseResponse(String rawResponse, GoogleResponse googleResponse) {
if (rawResponse == null || !rawResponse.contains("utterance"))
return;
String array = StringUtil.substringBetween(rawResponse, "[", "]");
String[] parts = array.split("}");
boolean first = true;
for( String s : parts ) {
if( first ) {
first = false;
String utterancePart = s.split(",")[0];
String confidencePart = s.split(",")[1];
String utterance = utterancePart.split(":")[1];
String confidence = confidencePart.split(":")[1];
utterance = StringUtil.stripQuotes(utterance);
confidence = StringUtil.stripQuotes(confidence);
if( utterance.equals("null") ) {
utterance = null;
}
if( confidence.equals("null") ) {
confidence = null;
}
googleResponse.setResponse(utterance);
googleResponse.setConfidence(confidence);
} else {
String utterance = s.split(":")[1];
utterance = StringUtil.stripQuotes(utterance);
if( utterance.equals("null") ) {
utterance = null;
}
googleResponse.getOtherPossibleResponses().add(utterance);
}
}
}
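/* For reference, a raw response from this legacy v1 endpoint looked roughly like the following
(illustrative only, not captured from a live request):
{"status":0,"id":"...","hypotheses":[{"utterance":"hello world","confidence":0.92},{"utterance":"hello word"}]}
The first hypothesis supplies the response and confidence; the rest populate otherPossibleResponses. */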
/**
* Performs the request to Google with a file <br>
* Request is buffered
*
* @param inputFile Input files to recognize
* @return Returns the raw, unparsed response from Google
* @throws IOException Throws exception if something went wrong
*/
private String rawRequest(File inputFile, int maxResults, int sampleRate) throws IOException{
URL url;
URLConnection urlConn;
OutputStream outputStream;
BufferedReader br;
StringBuilder sb = new StringBuilder(GOOGLE_RECOGNIZER_URL);
if( language != null ) {
sb.append("&lang=");
sb.append(language);
}
else{
sb.append("&lang=auto");
}
if( !profanityFilter ) {
sb.append("&pfilter=0");
}
sb.append("&maxresults=");
sb.append(maxResults);
// URL of Remote Script.
url = new URL(sb.toString());
// Open New URL connection channel.
urlConn = url.openConnection();
// we want to do output.
urlConn.setDoOutput(true);
// No caching
urlConn.setUseCaches(false);
// Specify the header content type.
urlConn.setRequestProperty("Content-Type", "audio/x-flac; rate=" + sampleRate);
// Send POST output.
outputStream = urlConn.getOutputStream();
FileInputStream fileInputStream = new FileInputStream(inputFile);
byte[] buffer = new byte[256];
int len;
while ((len = fileInputStream.read(buffer, 0, 256)) != -1) {
outputStream.write(buffer, 0, len);//Write only the bytes actually read on the final partial buffer
}
fileInputStream.close();
outputStream.close();
// Get response data.
br = new BufferedReader(new InputStreamReader(urlConn.getInputStream(), Charset.forName("UTF-8")));
String response = br.readLine();
br.close();
return response;
}
}
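/* Usage sketch for this deprecated class (hypothetical file path; the WAV is assumed to be
8 kHz, and the call can throw IOException):
Recognizer recognizer = new Recognizer(Recognizer.Languages.ENGLISH_US);
GoogleResponse response = recognizer.getRecognizedDataForWave("speech.wav", 3);
System.out.println(response.getResponse() + " (confidence " + response.getConfidence() + ")");
*/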

View file

@ -1,282 +0,0 @@
package com.darkprograms.speech.recognizer;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLConnection;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.ArrayList;
import java.util.List;
import javax.net.ssl.HttpsURLConnection;
import javax.xml.ws.http.HTTPException;
import com.darkprograms.speech.util.StringUtil;
/**
* This class uses Google's V2 hook. The class returns a chunked response, so listeners must be used.
* The class also requires an API key (see the constructor for details). This class is experimental and
* subject to change as we restructure the API.
* @author Aaron Gokaslan (Skylion)
*/
public class RecognizerChunked {
/**
* Google's API V2 URL
*/
private static final String GOOGLE_SPEECH_URL_V2 = "https://www.google.com/speech-api/v2/recognize";
/**
* API-Key used for requests
*/
private final String API_KEY;
/**
* The language code Google uses to determine the language
* Default value is "auto"
*/
private String language;
/**
* Stores the Response Listeners
*/
private List<GSpeechResponseListener> responseListeners = new ArrayList<GSpeechResponseListener>();
/**
* Constructor
* @param API_KEY The API-Key for Google's Speech API. An API key can be obtained by requesting
* one by following the process shown at this
* <a href="http://www.chromium.org/developers/how-tos/api-keys">url</a>.
*/
public RecognizerChunked(String API_KEY){
this.API_KEY = API_KEY;
this.language = "auto";
}
/**
* Constructor
* @param API_KEY The API-Key for Google's Speech API. An API key can be obtained by requesting
* one by following the process shown at this
* <a href="http://www.chromium.org/developers/how-tos/api-keys">url</a>.
* @param language The language you want to use (ISO code)
* Note: This function will most likely be deprecated.
*/
public RecognizerChunked(String API_KEY, String language){
this(API_KEY);
this.language = language;
}
/**
* The language the Recognizer is currently set to use.
* @return The ISO code, or "auto" if no language has been specified.
*/
public String getLanguage(){
return language;
}
/**
* Sets the language used for recognition.
* @param language The language as an ISO-Code
*/
public void setLanguage(String language){
this.language = language;
}
/**
* Analyzes the file for speech
* @param infile The file you want to analyze for speech.
* @param sampleRate The sample rate of the audioFile.
* @throws IOException if something goes wrong reading the file.
*/
public void getRecognizedDataForFlac(File infile, int sampleRate) throws IOException{
byte[] data = mapFileIn(infile);
getRecognizedDataForFlac(data, sampleRate);
}
/**
* Analyzes the file for speech
* @param inFile The file you want to analyze for speech.
* @param sampleRate The sample rate of the audioFile.
* @throws IOException if something goes wrong reading the file.
*/
public void getRecognizedDataForFlac(String inFile, int sampleRate) throws IOException{
getRecognizedDataForFlac(new File(inFile), sampleRate);
}
/**
* Recognizes the byte data.
* @param data The byte[] of FLAC audio data you want to recognize
* @param sampleRate The sample rate of the audio
*/
public void getRecognizedDataForFlac(byte[] data, int sampleRate){
StringBuilder sb = new StringBuilder(GOOGLE_SPEECH_URL_V2);
sb.append("?output=json");
sb.append("&client=chromium");
sb.append("&lang=" + language);
sb.append("&key=" + API_KEY);
String url = sb.toString();
openHttpsPostConnection(url, data, sampleRate);
}
/**
* Opens a chunked response HTTPS line to the specified URL
* @param urlStr The URL string to connect for chunking
* @param data The data you want to send to Google. Speech files under 15 seconds long recommended.
* @param sampleRate The sample rate for your audio file.
*/
private void openHttpsPostConnection(final String urlStr, final byte[] data, final int sampleRate) {
new Thread () {
public void run() {
HttpsURLConnection httpConn = null;
ByteBuffer buff = ByteBuffer.wrap(data);
byte[] destdata = new byte[2048];
int resCode = -1;
OutputStream out = null;
try {
URL url = new URL(urlStr);
URLConnection urlConn = url.openConnection();
if (!(urlConn instanceof HttpsURLConnection)) {
throw new IOException ("URL must be HTTPS");
}
httpConn = (HttpsURLConnection)urlConn;
httpConn.setAllowUserInteraction(false);
httpConn.setInstanceFollowRedirects(true);
httpConn.setRequestMethod("POST");
httpConn.setDoOutput(true);
httpConn.setChunkedStreamingMode(0); //TransferType: chunked
httpConn.setRequestProperty("Content-Type", "audio/x-flac; rate=" + sampleRate);
// this opens a connection, then sends POST & headers.
out = httpConn.getOutputStream();
//Beyond 15 seconds of duration, simply writing the file all at once
// does not seem to work. So buffer it and delay to simulate
// a buffered microphone delivering a stream of speech.
// re: net.http.ChunkedOutputStream.java
while(buff.remaining() >= destdata.length){
buff.get(destdata);
out.write(destdata);
}
byte[] lastr = new byte[buff.remaining()];
buff.get(lastr, 0, lastr.length);
out.write(lastr);
out.close();
resCode = httpConn.getResponseCode();
if(resCode >= HttpURLConnection.HTTP_UNAUTHORIZED){//Stops here if Google rejects the request
throw new HTTPException(HttpURLConnection.HTTP_UNAUTHORIZED);//Surfaces the authorization failure to the caller
}
String line;//Each line that is read back from Google.
BufferedReader br = new BufferedReader(new InputStreamReader(httpConn.getInputStream()));
while ((line = br.readLine( )) != null) {
if(line.length()>19 && resCode > 100 && resCode < HttpURLConnection.HTTP_UNAUTHORIZED){
GoogleResponse gr = new GoogleResponse();
parseResponse(line, gr);
fireResponseEvent(gr);
}
}
} catch (MalformedURLException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
finally {
if (httpConn != null) { httpConn.disconnect(); }//The connection may never have opened
}
}
}.start();
}
/**
* Converts the file into a byte[].
* @param infile The File you want to specify
* @return a byte array
* @throws IOException if something goes wrong reading the file.
*/
private byte[] mapFileIn(File infile) throws IOException{
FileInputStream fis = new FileInputStream(infile);
try{
FileChannel fc = fis.getChannel(); // Get the file's size and then map it into memory
int sz = (int)fc.size();
MappedByteBuffer bb = fc.map(FileChannel.MapMode.READ_ONLY, 0, sz);
byte[] data2 = new byte[bb.remaining()];
bb.get(data2);
return data2;
}
finally{//Ensures resources are closed regardless of whether the action succeeded
fis.close();
}
}
/**
* Parses the response into a Google Response
* @param rawResponse The raw String you want to parse
* @param gr The GoogleResponse you want to parse the data into.
*/
private void parseResponse(String rawResponse, GoogleResponse gr){
if(rawResponse == null || !rawResponse.contains("\"result\"")){ return; }
if(rawResponse.contains("\"confidence\":")){
String confidence = StringUtil.substringBetween(rawResponse, "\"confidence\":", "}");
gr.setConfidence(confidence);
}
else{
gr.setConfidence(String.valueOf(1d));
}
String array = StringUtil.trimString(rawResponse, "[", "]");
if(array.contains("[")){
array = StringUtil.trimString(array, "[", "]");
}
String[] parts = array.split(",");
gr.setResponse(parseTranscript(parts[0]));
for(int i = 1; i<parts.length; i++){
gr.getOtherPossibleResponses().add(parseTranscript(parts[i]));
}
}
/**
* Cleans up the transcript portion of the String
* @param s The string you want to process.
* @return The reformatted string.
*/
private String parseTranscript(String s){
String tmp = s.substring(s.indexOf(":")+1);
if(s.endsWith("}")){
tmp = tmp.substring(0, tmp.length()-1);
}
tmp = StringUtil.stripQuotes(tmp);
return tmp;
}
/**
* Adds a responseListener that triggers when a response from Google is received
* @param rl The response listener you want to add
*/
public synchronized void addResponseListener(GSpeechResponseListener rl){
responseListeners.add(rl);
}
/**
* Removes the specified response listener
* @param rl The response listener
*/
public synchronized void removeResponseListener(GSpeechResponseListener rl){
responseListeners.remove(rl);
}
/**
* Fires the response listener
* @param gr The GoogleResponse as the event object.
*/
private synchronized void fireResponseEvent(GoogleResponse gr){
for(GSpeechResponseListener gl: responseListeners){
gl.onResponse(gr);
}
}
}
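/* Usage sketch (hypothetical API key and file path; the FLAC is assumed to be 8 kHz,
and getRecognizedDataForFlac can throw IOException):
RecognizerChunked recognizer = new RecognizerChunked("YOUR_API_KEY", "en");
recognizer.addResponseListener(new GSpeechResponseListener() {
public void onResponse(GoogleResponse gr) {
System.out.println(gr.getResponse());
}
});
recognizer.getRecognizedDataForFlac(new java.io.File("speech.flac"), 8000);
*/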

View file

@ -1,261 +0,0 @@
package com.darkprograms.speech.synthesiser;
import java.io.IOException;
import java.io.InputStream;
import java.io.SequenceInputStream;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLEncoder;
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import com.darkprograms.speech.translator.GoogleTranslate;
/*******************************************************************************
* Synthesiser class that connects to Google's unofficial API to retrieve data
*
* @author Luke Kuza, Aaron Gokaslan (Skylion)
*******************************************************************************/
public class Synthesiser {
/**
* URL to query for Google synthesiser
*/
private final static String GOOGLE_SYNTHESISER_URL = "http://translate.google.com/translate_tts?tl=";
/**
* language of the Text you want to translate
*/
private String languageCode;
/**
* LANG_XX_XXXX Variables are language codes.
*/
public static final String LANG_AU_ENGLISH = "en-AU";
public static final String LANG_US_ENGLISH = "en-US";
public static final String LANG_UK_ENGLISH = "en-GB";
public static final String LANG_ES_SPANISH = "es";
public static final String LANG_FR_FRENCH = "fr";
public static final String LANG_DE_GERMAN = "de";
public static final String LANG_PT_PORTUGUESE = "pt-pt";
public static final String LANG_PT_BRAZILIAN = "pt-br";
//Please add more regional languages as you find them. Also try to include the accent code if you can.
/**
* Constructor
*/
public Synthesiser() {
languageCode = "auto";
}
/**
* Constructor that takes a language code parameter. Specify "auto" for automatic language detection.
* @param languageCode The language code, or "auto"
*/
public Synthesiser(String languageCode){
this.languageCode = languageCode;
}
/**
* Returns the current language code for the Synthesiser.
* Example: English (generic) = en, English (US) = en-US, English (UK) = en-GB, Spanish = es.
* @return the current language code parameter
*/
public String getLanguage(){
return languageCode;
}
/**
* Note: set language to auto to enable automatic language detection.
* Setting to null will also implement Google's automatic language detection
* @param languageCode The language code you would like to modify languageCode to.
*/
public void setLanguage(String languageCode){
this.languageCode = languageCode;
}
/**
* Gets an input stream to MP3 data for the returned information from a request
*
* @param synthText Text you want to be synthesized into MP3 data
* @return Returns an input stream of the MP3 data that is returned from Google
* @throws IOException Throws exception if it can not complete the request
*/
public InputStream getMP3Data(String synthText) throws IOException{
String languageCode = this.languageCode;//Ensures retention of language settings if set to auto
if(languageCode == null || languageCode.equals("") || languageCode.equalsIgnoreCase("auto")){
try{
languageCode = detectLanguage(synthText);//Detects language
if(languageCode == null){
languageCode = "en-us";//Reverts to Default Language if it can't detect it.
}
}
catch(Exception ex){
ex.printStackTrace();
languageCode = "en-us";//Reverts to Default Language if it can't detect it.
}
}
if(synthText.length()>100){
List<String> fragments = parseString(synthText);//parses String if too long
String tmp = getLanguage();
setLanguage(languageCode);//Keeps it from autodetecting each fragment.
InputStream out = getMP3Data(fragments);
setLanguage(tmp);//Reverts it to its previous language, such as auto.
return out;
}
String encoded = URLEncoder.encode(synthText, "UTF-8"); //Encode
URL url = new URL(GOOGLE_SYNTHESISER_URL + languageCode + "&q=" + encoded); //create url
// Open New URL connection channel.
URLConnection urlConn = url.openConnection(); //Open connection
urlConn.addRequestProperty("User-Agent", "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0) Gecko/20100101 Firefox/4.0"); //Adding header for user agent is required
return urlConn.getInputStream();
}
/**
* Gets an InputStream to MP3Data for the returned information from a request
* @param synthText List of Strings you want to be synthesized into MP3 data
* @return Returns an input stream of all the MP3 data that is returned from Google
* @throws IOException Throws exception if it cannot complete the request
*/
public InputStream getMP3Data(List<String> synthText) throws IOException{
//Uses an executor service pool for concurrency. Limit to 1000 threads max.
ExecutorService pool = Executors.newFixedThreadPool(1000);
//Stores the Future (Data that will be returned in the future)
Set<Future<InputStream>> set = new LinkedHashSet<Future<InputStream>>(synthText.size());
for(String part: synthText){ //Iterates through the list
Callable<InputStream> callable = new MP3DataFetcher(part);//Creates Callable
Future<InputStream> future = pool.submit(callable);//Begins to run Callable
set.add(future);//Adds the response that will be returned to a set.
}
List<InputStream> inputStreams = new ArrayList<InputStream>(set.size());
for(Future<InputStream> future: set){
try {
inputStreams.add(future.get());//Gets the returned data from the future.
} catch (ExecutionException e) {//Thrown if the MP3DataFetcher encountered an error.
Throwable ex = e.getCause();
if(ex instanceof IOException){
throw (IOException)ex;//Downcasts and rethrows it.
}
} catch (InterruptedException e){//Will probably never be called, but just in case...
Thread.currentThread().interrupt();//Interrupts the thread since something went wrong.
}
}
return new SequenceInputStream(Collections.enumeration(inputStreams));//Sequences the stream.
}
/**
* Separates a string into smaller parts so that Google will not reject the request.
* @param input The string you want to separate
* @return A List<String> of the String fragments from your input.
*/
private List<String> parseString(String input){
return parseString (input, new ArrayList<String>());
}
/**
* Separates a string into smaller parts so that Google will not reject the request.
* @param input The string you want to break up into smaller parts
* @param fragments List<String> that you want to add the fragments to.
* If you don't have a List<String> already constructed, "new ArrayList<String>()" works well.
* @return A list of the fragments of the original String
*/
private List<String> parseString(String input, List<String> fragments){
if(input.length()<=100){//Base Case
fragments.add(input);
return fragments;
}
else{
int lastWord = findLastWord(input);//Checks if a space exists
if(lastWord<=0){
fragments.add(input.substring(0,100));//In case you sent gibberish to Google.
return parseString(input.substring(100), fragments);
}else{
fragments.add(input.substring(0,lastWord));//Otherwise, adds the last word to the list for recursion.
return parseString(input.substring(lastWord), fragments);
}
}
}
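/* Illustration (assumed input): a 230-character string is split into three fragments,
each at most 100 characters, cut at the last ending-punctuation mark or space before
index 100 so playback never pauses mid-word. */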
/**
* Finds the last word in your String (before the index of 99) by searching for spaces and ending punctuation.
* Will preferably parse on punctuation to alleviate mid-sentence pausing
* @param input The String you want to search through.
* @return The index of where the last word of the string ends before the index of 99.
*/
private int findLastWord(String input){
if(input.length()<100)
return input.length();
int space = -1;
for(int i = 99; i>0; i--){
char tmp = input.charAt(i);
if(isEndingPunctuation(tmp)){
return i+1;
}
if(space==-1 && tmp == ' '){
space = i;
}
}
if(space>0){
return space;
}
return -1;
}
/**
* Checks if a char is an ending punctuation character.
* Covers ending punctuation for all languages according to Wikipedia (except non-Unicode Sanskrit).
* @param input The char you want to check
* @return True if it is, false if not.
*/
private boolean isEndingPunctuation(char input){
return input == '.' || input == '!' || input == '?' || input == ';' || input == ':' || input == '|';
}
/**
* Automatically determines the language of the original text
* @param text represents the text you want to check the language of
* @return the languageCode in ISO-639
* @throws IOException if it cannot complete the request
*/
public String detectLanguage(String text) throws IOException{
return GoogleTranslate.detectLanguage(text);
}
/**
* This class is a callable.
* A callable is like a runnable except that it can return data and throw exceptions.
* Useful when using futures. Dramatically improves the speed of execution.
* @author Aaron Gokaslan (Skylion)
*/
private class MP3DataFetcher implements Callable<InputStream>{
private String synthText;
public MP3DataFetcher(String synthText){
this.synthText = synthText;
}
public InputStream call() throws IOException{
return getMP3Data(synthText);
}
}
}
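/* Usage sketch (an MP3 player is not part of this API; JLayer's javazoom.jl.player.Player
is one known option, and getMP3Data can throw IOException):
Synthesiser synth = new Synthesiser(Synthesiser.LANG_US_ENGLISH);
java.io.InputStream mp3 = synth.getMP3Data("Hello world, this is a test of the synthesiser.");
new javazoom.jl.player.Player(mp3).play();
*/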

View file

@ -1,303 +0,0 @@
package com.darkprograms.speech.synthesiser;
import java.io.IOException;
import java.io.InputStream;
import java.io.SequenceInputStream;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLEncoder;
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import com.darkprograms.speech.translator.GoogleTranslate;
/**
* This class uses the V2 version of Google's Text to Speech API. While this class requires an API key,
* the endpoint allows for additional specification of parameters including speed and pitch.
* See the constructor for instructions regarding the API_Key.
* @author Skylion (Aaron Gokaslan)
*/
public class SynthesiserV2 {
private static final String GOOGLE_SYNTHESISER_URL = "https://www.google.com/speech-api/v2/synthesize?enc=mpeg" +
"&client=chromium";
/**
* API_KEY used for requests
*/
private final String API_KEY;
/**
* language of the Text you want to translate
*/
private String languageCode;
/**
* The pitch of the generated audio
*/
private double pitch = 1.0;
/**
* The speed of the generated audio
*/
private double speed = 1.0;
/**
* Constructor
* @param API_KEY The API-Key for Google's Speech API. An API key can be obtained by requesting
* one by following the process shown at this
* <a href="http://www.chromium.org/developers/how-tos/api-keys">url</a>.
*/
public SynthesiserV2(String API_KEY){
this.API_KEY = API_KEY;
}
/**
* Returns the current language code for the Synthesiser.
* Example: English (generic) = en, English (US) = en-US, English (UK) = en-GB, Spanish = es.
* @return the current language code parameter
*/
public String getLanguage(){
return languageCode;
}
/**
* Note: set language to auto to enable automatic language detection.
* Setting to null will also implement Google's automatic language detection
* @param languageCode The language code you would like to modify languageCode to.
*/
public void setLanguage(String languageCode){
this.languageCode = languageCode;
}
/**
* @return the pitch
*/
public double getPitch() {
return pitch;
}
/**
* Sets the pitch of the audio.
* Valid values range from 0 to 2 inclusive.
* Values above 1 correspond to higher pitch, values below 1 correspond to lower pitch.
* @param pitch the pitch to set
*/
public void setPitch(double pitch) {
this.pitch = pitch;
}
/**
* @return the speed
*/
public double getSpeed() {
return speed;
}
/**
* Sets the speed of audio.
* Valid values range from 0 to 2 inclusive.
* Values above 1 correspond to faster speech, values below 1 to slower speech.
* @param speed the speed to set
*/
public void setSpeed(double speed) {
this.speed = speed;
}
/**
* Gets an input stream to MP3 data for the returned information from a request
*
* @param synthText Text you want to be synthesized into MP3 data
* @return Returns an input stream of the MP3 data that is returned from Google
* @throws IOException Throws exception if it can not complete the request
*/
public InputStream getMP3Data(String synthText) throws IOException{
String languageCode = this.languageCode;//Ensures retention of language settings if set to auto
if(languageCode == null || languageCode.equals("") || languageCode.equalsIgnoreCase("auto")){
try{
languageCode = detectLanguage(synthText);//Detects language
if(languageCode == null){
languageCode = "en-us";//Reverts to Default Language if it can't detect it.
}
}
catch(Exception ex){
ex.printStackTrace();
languageCode = "en-us";//Reverts to Default Language if it can't detect it.
}
}
if(synthText.length()>100){
List<String> fragments = parseString(synthText);//parses String if too long
String tmp = getLanguage();
setLanguage(languageCode);//Keeps it from autodetecting each fragment.
InputStream out = getMP3Data(fragments);
setLanguage(tmp);//Reverts it to its previous language, such as auto.
return out;
}
String encoded = URLEncoder.encode(synthText, "UTF-8"); //Encode
StringBuilder sb = new StringBuilder(GOOGLE_SYNTHESISER_URL);
sb.append("&key=" + API_KEY);
sb.append("&text=" + encoded);
sb.append("&lang=" + languageCode);
if(speed>=0 && speed<=2.0){
sb.append("&speed=" + speed/2.0);
}
if(pitch>=0 && pitch<=2.0){
sb.append("&pitch=" + pitch/2.0);
}
URL url = new URL(sb.toString()); //create url
// Open New URL connection channel.
URLConnection urlConn = url.openConnection(); //Open connection
urlConn.addRequestProperty("User-Agent", "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0) Gecko/20100101 Firefox/4.0"); //Adding header for user agent is required
return urlConn.getInputStream();
}
/**
* Gets an InputStream to MP3Data for the returned information from a request
* @param synthText List of Strings you want to be synthesized into MP3 data
* @return Returns an input stream of all the MP3 data that is returned from Google
* @throws IOException Throws exception if it cannot complete the request
*/
public InputStream getMP3Data(List<String> synthText) throws IOException{
//Uses an executor service pool for concurrency. Limit to 1000 threads max.
ExecutorService pool = Executors.newFixedThreadPool(1000);
//Stores the Future (Data that will be returned in the future)
Set<Future<InputStream>> set = new LinkedHashSet<Future<InputStream>>(synthText.size());
for(String part: synthText){ //Iterates through the list
Callable<InputStream> callable = new MP3DataFetcher(part);//Creates Callable
Future<InputStream> future = pool.submit(callable);//Begins to run Callable
set.add(future);//Adds the response that will be returned to a set.
}
List<InputStream> inputStreams = new ArrayList<InputStream>(set.size());
for(Future<InputStream> future: set){
try {
inputStreams.add(future.get());//Gets the returned data from the future.
} catch (ExecutionException e) {//Thrown if the MP3DataFetcher encountered an error.
Throwable ex = e.getCause();
if(ex instanceof IOException){
throw (IOException)ex;//Downcasts and rethrows it.
}
} catch (InterruptedException e){//Will probably never be called, but just in case...
Thread.currentThread().interrupt();//Interrupts the thread since something went wrong.
}
}
return new SequenceInputStream(Collections.enumeration(inputStreams));//Sequences the stream.
}
/**
* Separates a string into smaller parts so that Google will not reject the request.
* @param input The string you want to separate
* @return A List<String> of the String fragments from your input.
*/
private List<String> parseString(String input){
return parseString (input, new ArrayList<String>());
}
/**
* Separates a string into smaller parts so that Google will not reject the request.
* @param input The string you want to break up into smaller parts
* @param fragments List<String> that you want to add the fragments to.
* If you don't have a List<String> already constructed, "new ArrayList<String>()" works well.
* @return A list of the fragments of the original String
*/
private List<String> parseString(String input, List<String> fragments){
if(input.length()<=100){//Base Case
fragments.add(input);
return fragments;
}
else{
int lastWord = findLastWord(input);//Checks if a space exists
if(lastWord<=0){
fragments.add(input.substring(0,100));//In case you sent gibberish to Google.
return parseString(input.substring(100), fragments);
}else{
fragments.add(input.substring(0,lastWord));//Otherwise, adds the last word to the list for recursion.
return parseString(input.substring(lastWord), fragments);
}
}
}
/**
* Finds the last word in your String (before the index of 99) by searching for spaces and ending punctuation.
* Prefers to split on ending punctuation, to alleviate mid-sentence pausing.
* @param input The String you want to search through.
* @return The index of where the last word of the string ends before the index of 99.
*/
private int findLastWord(String input){
if(input.length()<100)
return input.length();
int space = -1;
for(int i = 99; i>0; i--){
char tmp = input.charAt(i);
if(isEndingPunctuation(tmp)){
return i+1;
}
if(space==-1 && tmp == ' '){
space = i;
}
}
if(space>0){
return space;
}
return -1;
}
/**
* Checks if char is an ending character
* Ending punctuation for all languages according to Wikipedia (Except for Sanskrit non-unicode)
* @param input The char you want to check
* @return True if it is, false if not.
*/
private boolean isEndingPunctuation(char input){
return input == '.' || input == '!' || input == '?' || input == ';' || input == ':' || input == '|';
}
/**
* Automatically determines the language of the original text
* @param text represents the text you want to check the language of
* @return the languageCode in ISO-639
* @throws IOException if it cannot complete the request
*/
public String detectLanguage(String text) throws IOException{
return GoogleTranslate.detectLanguage(text);
}
/**
* This class is a callable.
* A callable is like a runnable except that it can return data and throw exceptions.
* Useful when using futures. Dramatically improves the speed of execution.
* @author Aaron Gokaslan (Skylion)
*/
private class MP3DataFetcher implements Callable<InputStream>{
private String synthText;
public MP3DataFetcher(String synthText){
this.synthText = synthText;
}
public InputStream call() throws IOException{
return getMP3Data(synthText);
}
}
}
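A minimal usage sketch, not part of the original file: it assumes the enclosing class above is the public Synthesiser with a no-argument constructor; only getMP3Data(String) is actually visible in this excerpt.
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

public class SynthesiserDemo {
    public static void main(String[] args) throws Exception {
        Synthesiser synth = new Synthesiser(); // assumed constructor
        try (InputStream mp3 = synth.getMP3Data("Hello world, testing text to speech.");
             OutputStream out = new FileOutputStream("speech.mp3")) {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = mp3.read(buffer)) != -1) { // copy the MP3 stream to disk
                out.write(buffer, 0, read);
            }
        }
    }
}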


@ -1,168 +0,0 @@
package com.darkprograms.speech.translator;
import java.io.IOException;
import java.io.Reader;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLEncoder;
import java.nio.charset.Charset;
import java.util.Locale;
/***************************************************************************************************************
* An API for a Google Translation service in Java.
* Please Note: This API is unofficial and is not supported by Google. Subject to breakage at any time.
* The translator allows for language detection and translation.
* Recommended for translation of user interfaces or speech commands.
* All translation services provided via Google Translate
* @author Aaron Gokaslan (Skylion)
***************************************************************************************************************/
public final class GoogleTranslate { //Class marked as final since all methods are static
/**
* URL to query for Translation
*/
private final static String GOOGLE_TRANSLATE_URL = "http://translate.google.com/translate_a/t?client=t";
/**
* Private to prevent instantiation
*/
private GoogleTranslate(){}
/**
* Converts the ISO-639 code into a friendly language code in the user's default language
* For example, if the language is English and the default locale is French, it will return "anglais"
* Useful for UI Strings
* @param languageCode The ISO 639-1 code of the language
* @return The language in the user's default language
* @see {@link #detectLanguage}
*/
public static String getDisplayLanguage(String languageCode){
return (new Locale(languageCode)).getDisplayLanguage();
}
/**
* Automatically determines the language of the original text
* @param text represents the text you want to check the language of
* @return The ISO-639 code for the language
* @throws IOException if it cannot complete the request
*/
public static String detectLanguage(String text) throws IOException{
String encoded = URLEncoder.encode(text, "UTF-8"); //Encodes the string
URL url = new URL(GOOGLE_TRANSLATE_URL + "&text=" + encoded); //Generates URL
String rawData = urlToText(url);//Gets text from Google
return findLanguage(rawData);
}
/**
* Automatically translates text to a system's default language according to its locale
* Useful for creating international applications as you can translate UI strings
* @param text The text you want to translate
* @return The translated text
* @throws IOException if cannot complete request
*/
public static String translate(String text) throws IOException{
return translate(Locale.getDefault().getLanguage(), text);
}
/**
* Automatically detects language and translate to the targetLanguage
* @param targetLanguage The language you want to translate into in ISO-639 format
* @param text The text you actually want to translate
* @return The translated text.
* @throws IOException if it cannot complete the request
*/
public static String translate(String targetLanguage, String text) throws IOException{
return translate("auto",targetLanguage, text);
}
/**
* Translate text from sourceLanguage to targetLanguage
* Specifying the sourceLanguage greatly improves accuracy for short Strings
* @param sourceLanguage The language you want to translate from in ISO-639 format
* @param targetLanguage The language you want to translate into in ISO-639 format
* @param text The text you actually want to translate
* @return the translated text.
* @throws IOException if it cannot complete the request
*/
public static String translate(String sourceLanguage, String targetLanguage, String text) throws IOException{
String encoded = URLEncoder.encode(text, "UTF-8"); //Encode
//Generates URL
URL url = new URL(GOOGLE_TRANSLATE_URL + "&sl=" + sourceLanguage + "&tl=" + targetLanguage + "&text=" + encoded);
String rawData = urlToText(url);//Gets text from Google
if(rawData==null){
return null;
}
String[] raw = rawData.split("\"");//Parses the JSON
if(raw.length<2){
return null;
}
return raw[1];//Returns the translation
}
/**
* Converts a URL to Text
* @param url that you want to generate a String from
* @return The generated String
* @throws IOException if it cannot complete the request
*/
private static String urlToText(URL url) throws IOException{
URLConnection urlConn = url.openConnection(); //Open connection
//Adding header for user agent is required. Otherwise, Google rejects the request
urlConn.addRequestProperty("User-Agent", "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0) Gecko/20100101 Firefox/4.0");
Reader r = new java.io.InputStreamReader(urlConn.getInputStream(), Charset.forName("UTF-8"));//Gets Data Converts to string
StringBuilder buf = new StringBuilder();
while (true) {//Reads String from buffer
int ch = r.read();
if (ch < 0)
break;
buf.append((char) ch);
}
String str = buf.toString();
return str;
}
/**
* Searches the raw data for the language.
* @param rawData the raw String directly from Google that you want to search through
* @return The language parsed from the rawData, or null if Google cannot determine it.
*/
private static String findLanguage(String rawData){
for(int i = 0; i+5<rawData.length(); i++){
boolean dashDetected = rawData.charAt(i+4)=='-';
if(rawData.charAt(i)==',' && rawData.charAt(i+1)== '"'
&& ((rawData.charAt(i+4)=='"' && rawData.charAt(i+5)==',')
|| dashDetected)){
if(dashDetected){
int lastQuote = rawData.substring(i+2).indexOf('"');
if(lastQuote>0)
return rawData.substring(i+2,i+2+lastQuote);
}
else{
String possible = rawData.substring(i+2,i+4);
if(containsLettersOnly(possible)){//Required due to Google's inconsistent formatting.
return possible;
}
}
}
}
return null;
}
/**
* Checks if all characters in text are letters.
* @param text The text you want to determine the validity of.
* @return True if all characters are letters, otherwise false.
*/
private static boolean containsLettersOnly(String text){
for(int i = 0; i<text.length(); i++){
if(!Character.isLetter(text.charAt(i))){
return false;
}
}
return true;
}
}
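A brief sketch, not part of the original file, exercising the public API above. As the class header warns, the unofficial endpoint is subject to breakage and may stop responding at any time.
public class TranslateDemo {
    public static void main(String[] args) throws java.io.IOException {
        String code = GoogleTranslate.detectLanguage("Bonjour tout le monde"); // e.g. "fr"
        System.out.println(GoogleTranslate.getDisplayLanguage(code));          // e.g. "French"
        System.out.println(GoogleTranslate.translate("en", "Bonjour tout le monde"));
    }
}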


@ -1,190 +0,0 @@
package com.darkprograms.speech.util;
//TODO Replace this class with something that isn't 20 years old.
//ChunkedOutputStream - an OutputStream that implements HTTP/1.1 chunking
//
//Copyright (C) 1996 by Jef Poskanzer <jef@acme.com>. All rights reserved.
//
//Redistribution and use in source and binary forms, with or without
//modification, are permitted provided that the following conditions
//are met:
//1. Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//2. Redistributions in binary form must reproduce the above copyright
// notice, this list of conditions and the following disclaimer in the
// documentation and/or other materials provided with the distribution.
//
//THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
//ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
//IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
//ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
//FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
//DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
//OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
//HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
//LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
//OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
//SUCH DAMAGE.
//
//Visit the ACME Labs Java page for up-to-date versions of this and other
//fine Java utilities: http://www.acme.com/java/
import java.io.*;
import java.util.*;
/// An OutputStream that implements HTTP/1.1 chunking.
//<P>
//This class lets a Servlet send its response data as an HTTP/1.1 chunked
//stream. Chunked streams are a way to send arbitrary-length data without
//having to know beforehand how much you're going to send. They are
//introduced by a "Transfer-Encoding: chunked" header, so you have to
//set that header when you make one of these streams.
//<P>
//Sample usage:
//<BLOCKQUOTE><PRE><CODE>
//res.setHeader( "Transfer-Encoding", "chunked" );
//OutputStream out = res.getOutputStream();
//ChunkedOutputStream chunkOut = new ChunkedOutputStream( out );
//(write data to chunkOut instead of out)
//(optionally set footers)
//chunkOut.done();
//</CODE></PRE></BLOCKQUOTE>
//<P>
//Every time the stream gets flushed, a chunk is sent. When done()
//is called, an empty chunk is sent, marking the end of the chunked
//stream as per the chunking spec.
//<P>
//<A HREF="/resources/classes/Acme/Serve/servlet/http/ChunkedOutputStream.java">Fetch the software.</A><BR>
//<A HREF="/resources/classes/Acme.tar.Z">Fetch the entire Acme package.</A>
public class ChunkedOutputStream extends BufferedOutputStream
{
/// Make a ChunkedOutputStream with a default buffer size.
// @param out the underlying output stream
public ChunkedOutputStream( OutputStream out )
{
super( out );
}
/// Make a ChunkedOutputStream with a specified buffer size.
// @param out the underlying output stream
// @param size the buffer size
public ChunkedOutputStream( OutputStream out, int size )
{
super( out, size );
}
/// Flush the stream. This will write any buffered output
// bytes as a chunk.
// @exception IOException if an I/O error occurred
public synchronized void flush() throws IOException
{
if ( count != 0 )
{
writeBuf( buf, 0, count );
count = 0;
}
}
private Vector footerNames = new Vector();
private Vector footerValues = new Vector();
/// Set a footer. Footers are much like HTTP headers, except that
// they come at the end of the data instead of at the beginning.
public void setFooter( String name, String value )
{
footerNames.addElement( name );
footerValues.addElement( value );
}
/// Indicate the end of the chunked data by sending a zero-length chunk,
// possibly including footers.
// @exception IOException if an I/O error occurred
public void done() throws IOException
{
flush();
PrintStream pout = new PrintStream( out );
pout.println( "0" );
if ( footerNames.size() > 0 )
{
// Send footers.
for ( int i = 0; i < footerNames.size(); ++i )
{
String name = (String) footerNames.elementAt( i );
String value = (String) footerValues.elementAt( i );
pout.println( name + ": " + value );
}
}
footerNames = null;
footerValues = null;
pout.println( "" );
pout.flush();
}
/// Make sure that calling close() terminates the chunked stream.
public void close() throws IOException
{
if ( footerNames != null )
done();
super.close();
}
/// Write a sub-array of bytes.
// <P>
// The only reason we have to override the BufferedOutputStream version
// of this is that it writes the array directly to the output stream
// if it doesn't fit in the buffer. So we make it use our own chunk-write
// routine instead. Otherwise this is identical to the parent-class
// version.
// @param b the data to be written
// @param off the start offset in the data
// @param len the number of bytes that are written
// @exception IOException if an I/O error occurred
public synchronized void write( byte b[], int off, int len ) throws IOException
{
int avail = buf.length - count;
if ( len <= avail )
{
System.arraycopy( b, off, buf, count, len );
count += len;
return;
}
flush();
writeBuf( b, off, len );
}
private static final byte[] crlf = { 13, 10 };
private byte[] lenBytes = new byte[20]; // big enough for any number in hex
/// The only routine that actually writes to the output stream.
// This is where chunking semantics are implemented.
// @exception IOException if an I/O error occurred
private void writeBuf( byte b[], int off, int len ) throws IOException
{
// Write the chunk length as a hex number.
String lenStr = Integer.toString( len, 16 );
lenStr.getBytes( 0, lenStr.length(), lenBytes, 0 ); // deprecated copy, kept to avoid allocating per chunk
out.write( lenBytes, 0, lenStr.length() ); // write only the hex digits, not the whole 20-byte buffer
// Write a CRLF.
out.write( crlf );
// Write the data.
if ( len != 0 )
out.write( b, off, len );
// Write a CRLF.
out.write( crlf );
// And flush the real stream.
out.flush();
}
}
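A small sketch, not part of the original file, showing the chunked wire format the class produces, using an in-memory stream in place of a servlet response. Note that done() writes the trailer lines with println, i.e. the platform line separator rather than a strict CRLF.
import java.io.ByteArrayOutputStream;

public class ChunkedDemo {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream raw = new ByteArrayOutputStream();
        ChunkedOutputStream chunked = new ChunkedOutputStream(raw);
        chunked.write("hello".getBytes("US-ASCII"));
        chunked.flush(); // emits the chunk: "5" CRLF "hello" CRLF
        chunked.setFooter("X-Checksum", "abc123");
        chunked.done();  // emits the zero-length chunk, the footer, and a blank line
        System.out.print(raw.toString("US-ASCII"));
    }
}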


@ -1,120 +0,0 @@
package com.darkprograms.speech.util;
/*************************************************************************
* Compilation: javac Complex.java
* Execution: java Complex
*
* Data type for complex numbers.
*
* The data type is "immutable" so once you create and initialize
* a Complex object, you cannot change it. The "final" keyword
* when declaring re and im enforces this rule, making it a
* compile-time error to change the .re or .im fields after
* they've been initialized.
*
* Class based on Princeton University's Complex.java class
* @author Aaron Gokaslan, Princeton University
*************************************************************************/
public class Complex {
private final double re; // the real part
private final double im; // the imaginary part
// create a new object with the given real and imaginary parts
public Complex(double real, double imag) {
re = real;
im = imag;
}
// return a string representation of the invoking Complex object
public String toString() {
if (im == 0) return re + "";
if (re == 0) return im + "i";
if (im < 0) return re + " - " + (-im) + "i";
return re + " + " + im + "i";
}
// return abs/modulus/magnitude and angle/phase/argument
public double abs() { return Math.hypot(re, im); } // Math.sqrt(re*re + im*im)
public double phase() { return Math.atan2(im, re); } // between -pi and pi
// return a new Complex object whose value is (this + b)
public Complex plus(Complex b) {
Complex a = this; // invoking object
double real = a.re + b.re;
double imag = a.im + b.im;
return new Complex(real, imag);
}
// return a new Complex object whose value is (this - b)
public Complex minus(Complex b) {
Complex a = this;
double real = a.re - b.re;
double imag = a.im - b.im;
return new Complex(real, imag);
}
// return a new Complex object whose value is (this * b)
public Complex times(Complex b) {
Complex a = this;
double real = a.re * b.re - a.im * b.im;
double imag = a.re * b.im + a.im * b.re;
return new Complex(real, imag);
}
// scalar multiplication
// return a new object whose value is (this * alpha)
public Complex times(double alpha) {
return new Complex(alpha * re, alpha * im);
}
// return a new Complex object whose value is the conjugate of this
public Complex conjugate() { return new Complex(re, -im); }
// return a new Complex object whose value is the reciprocal of this
public Complex reciprocal() {
double scale = re*re + im*im;
return new Complex(re / scale, -im / scale);
}
// return the real or imaginary part
public double re() { return re; }
public double im() { return im; }
// return a / b
public Complex divides(Complex b) {
Complex a = this;
return a.times(b.reciprocal());
}
// return a new Complex object whose value is the complex exponential of this
public Complex exp() {
return new Complex(Math.exp(re) * Math.cos(im), Math.exp(re) * Math.sin(im));
}
// return a new Complex object whose value is the complex sine of this
public Complex sin() {
return new Complex(Math.sin(re) * Math.cosh(im), Math.cos(re) * Math.sinh(im));
}
// return a new Complex object whose value is the complex cosine of this
public Complex cos() {
return new Complex(Math.cos(re) * Math.cosh(im), -Math.sin(re) * Math.sinh(im));
}
// return a new Complex object whose value is the complex tangent of this
public Complex tan() {
return sin().divides(cos());
}
// returns the magnitude of the complex number (equivalent to abs()).
public double getMagnitude(){
return Math.sqrt(re*re+im*im);
}
// note: this overloads Object.equals(Object) rather than overriding it
public boolean equals(Complex other){
return (re==other.re) && (im==other.im);
}
}


@ -1,133 +0,0 @@
package com.darkprograms.speech.util;
/*************************************************************************
* Compilation: javac FFT.java
* Execution: java FFT N
* Dependencies: Complex.java
*
* Compute the FFT and inverse FFT of a length N complex sequence.
* Bare bones implementation that runs in O(N log N) time. Our goal
* is to optimize the clarity of the code, rather than performance.
*
* Limitations
* -----------
* - assumes N is a power of 2
*
* - not the most memory efficient algorithm (because it uses
* an object type for representing complex numbers and because
* it re-allocates memory for the subarray, instead of doing
* in-place or reusing a single temporary array)
*
*************************************************************************/
/*************************************************************************
* @author Skylion implementation
* @author Princeton University for the actual algorithm.
************************************************************************/
public class FFT {
// compute the FFT of x[], assuming its length is a power of 2
public static Complex[] fft(Complex[] x) {
int N = x.length;
// base case
if (N == 1) return new Complex[] { x[0] };
// radix 2 Cooley-Tukey FFT
if (N % 2 != 0) { throw new RuntimeException("N is not a power of 2"); }
// fft of even terms
Complex[] even = new Complex[N/2];
for (int k = 0; k < N/2; k++) {
even[k] = x[2*k];
}
Complex[] q = fft(even);
// fft of odd terms
Complex[] odd = even; // reuse the array
for (int k = 0; k < N/2; k++) {
odd[k] = x[2*k + 1];
}
Complex[] r = fft(odd);
// combine
Complex[] y = new Complex[N];
for (int k = 0; k < N/2; k++) {
double kth = -2 * k * Math.PI / N;
Complex wk = new Complex(Math.cos(kth), Math.sin(kth));
y[k] = q[k].plus(wk.times(r[k]));
y[k + N/2] = q[k].minus(wk.times(r[k]));
}
return y;
}
// compute the inverse FFT of x[], assuming its length is a power of 2
public static Complex[] ifft(Complex[] x) {
int N = x.length;
Complex[] y = new Complex[N];
// take conjugate
for (int i = 0; i < N; i++) {
y[i] = x[i].conjugate();
}
// compute forward FFT
y = fft(y);
// take conjugate again
for (int i = 0; i < N; i++) {
y[i] = y[i].conjugate();
}
// divide by N
for (int i = 0; i < N; i++) {
y[i] = y[i].times(1.0 / N);
}
return y;
}
// compute the circular convolution of x and y
public static Complex[] cconvolve(Complex[] x, Complex[] y) {
// should probably pad x and y with 0s so that they have same length
// and are powers of 2
if (x.length != y.length) { throw new RuntimeException("Dimensions don't agree"); }
int N = x.length;
// compute FFT of each sequence
Complex[] a = fft(x);
Complex[] b = fft(y);
// point-wise multiply
Complex[] c = new Complex[N];
for (int i = 0; i < N; i++) {
c[i] = a[i].times(b[i]);
}
// compute inverse FFT
return ifft(c);
}
// compute the linear convolution of x and y
public static Complex[] convolve(Complex[] x, Complex[] y) {
Complex ZERO = new Complex(0, 0);
Complex[] a = new Complex[2*x.length];
for (int i = 0; i < x.length; i++) a[i] = x[i];
for (int i = x.length; i < 2*x.length; i++) a[i] = ZERO;
Complex[] b = new Complex[2*y.length];
for (int i = 0; i < y.length; i++) b[i] = y[i];
for (int i = y.length; i < 2*y.length; i++) b[i] = ZERO;
return cconvolve(a, b);
}
}
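A short sketch, not part of the original file: the forward FFT of a 4-point real sequence, then a round trip through the inverse transform.
public class FFTDemo {
    public static void main(String[] args) {
        Complex[] x = {
            new Complex(1, 0), new Complex(2, 0),
            new Complex(3, 0), new Complex(4, 0)
        };
        Complex[] y = FFT.fft(x); // length must be a power of 2
        for (Complex c : y) {
            System.out.println(c); // approx. 10, -2+2i, -2, -2-2i (tiny rounding residues possible)
        }
        Complex[] z = FFT.ifft(y); // recovers the original sequence (within rounding)
        System.out.println(z[0].re()); // ~1.0
    }
}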


@ -1,69 +0,0 @@
package com.darkprograms.speech.util;
/**
* A string utility class for commonly used methods.
* These methods are particularly useful for parsing.
* @author Skylion
*/
public class StringUtil {
private StringUtil() {};//Prevents instantiation
/**
* Removes quotation marks from beginning and end of string.
* @param s The string you want to remove the quotation marks from.
* @return The modified String.
*/
public static String stripQuotes(String s) {
int start = 0;
if( s.startsWith("\"") ) {
start = 1;
}
int end = s.length();
if( s.endsWith("\"") ) {
end = s.length() - 1;
}
return s.substring(start, end);
}
/**
* Returns the first instance of String found exclusively between part1 and part2.
* @param s The String you want to substring.
* @param part1 The beginning of the String you want to search for.
* @param part2 The end of the String you want to search for.
* @return The String between part1 and part2.
* If the s does not contain part1 or part2, the method returns null.
*/
public static String substringBetween(String s, String part1, String part2) {
String sub = null;
int i = s.indexOf(part1);
int j = s.indexOf(part2, i + part1.length());
if (i != -1 && j != -1) {
int nStart = i + part1.length();
sub = s.substring(nStart, j);
}
return sub;
}
/**
* Gets the string exclusively between the first instance of part1 and the last instance of part2.
* @param s The string you want to trim.
* @param part1 The term to trim after the first instance of.
* @param part2 The term to trim before the last instance of.
* @return The trimmed String
*/
public static String trimString(String s, String part1, String part2){
if(!s.contains(part1) || !s.contains(part2)){
return null;
}
int first = s.indexOf(part1) + part1.length() + 1; //note: skips one extra character after part1
String tmp = s.substring(first);
int last = tmp.lastIndexOf(part2);
tmp = tmp.substring(0, last);
return tmp;
}
}
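A brief sketch, not part of the original file, of the parsing helpers above.
public class StringUtilDemo {
    public static void main(String[] args) {
        System.out.println(StringUtil.stripQuotes("\"hello\""));                   // hello
        System.out.println(StringUtil.substringBetween("key=[value];", "[", "]")); // value
    }
}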

Binary file not shown.


@ -1,55 +0,0 @@
Sphinx-4 Speech Recognition System
-------------------------------------------------------------------
Sphinx-4 is a state-of-the-art, speaker-independent, continuous speech
recognition system written entirely in the Java programming language. It
was created via a joint collaboration between the Sphinx group at
Carnegie Mellon University, Sun Microsystems Laboratories, Mitsubishi
Electric Research Labs (MERL), and Hewlett Packard (HP), with
contributions from the University of California at Santa Cruz (UCSC) and
the Massachusetts Institute of Technology (MIT).
The design of Sphinx-4 is based on patterns that have emerged from the
design of past systems as well as new requirements based on areas that
researchers currently want to explore. To exercise this framework, and
to provide researchers with a "research-ready" system, Sphinx-4 also
includes several implementations of both simple and state-of-the-art
techniques. The framework and the implementations are all freely
available via open source under a very generous BSD-style license.
Because it is written entirely in the Java programming language, Sphinx-4
can run on a variety of platforms without requiring any special
compilation or changes. We've tested Sphinx-4 on the following platforms
with success.
To get started with sphinx4 visit our wiki
http://cmusphinx.sourceforge.net/wiki
Please give Sphinx-4 a try and post your questions, comments, and
feedback to one of the CMU Sphinx Forums:
http://sourceforge.net/p/cmusphinx/discussion/sphinx4
We can also be reached at cmusphinx-devel@lists.sourceforge.net.
Sincerely,
The Sphinx-4 Team:
(in alph. order)
Evandro Gouvea, CMU (developer and speech advisor)
Peter Gorniak, MIT (developer)
Philip Kwok, Sun Labs (developer)
Paul Lamere, Sun Labs (design/technical lead)
Beth Logan, HP (speech advisor)
Pedro Moreno, Google (speech advisor)
Bhiksha Raj, MERL (design lead)
Mosur Ravishankar, CMU (speech advisor)
Bent Schmidt-Nielsen, MERL (speech advisor)
Rita Singh, CMU/MIT (design/speech advisor)
JM Van Thong, HP (speech advisor)
Willie Walker, Sun Labs (overall lead)
Manfred Warmuth, UCSC (speech advisor)
Joe Woelfel, MERL (developer and speech advisor)
Peter Wolf, MERL (developer and speech advisor)


@ -1,193 +0,0 @@
Sphinx-4 Speech Recognition System
-------------------------------------------------------------------
Version: 1.0Beta6
Release Date: March 2011
-------------------------------------------------------------------
New Features and Improvements:
* SRGS/GrXML support, more to come soon with support for JSAPI2
* Model layout is unified with Pocketsphinx/Sphinxtrain
* Netbeans project files are included
* Language models can be loaded from URI
* Batch testing application allows testing inside Sphinxtrain
Bug Fixes:
* Flat linguist accuracy issue fixed
* Intelligent sorting in partitioner fixes stack overflow when tokens
have identical scores
* Various bug fixes
Thanks:
Timo Baumann, Nasir Hussain, Michele Alessandrini, Evandro Gouvea,
Stephen Marquard, Larry A. Taylor, Yuri Orlov, Dirk Schnelle-Walka,
James Chivers, Firas Al Khalil
-------------------------------------------------------------------
Version: 1.0Beta5
Release Date: August 2010
-------------------------------------------------------------------
New Features and Improvements:
* Alignment demo and grammar to align long speech recordings to
transcription and get word times
* Lattice grammar for multipass decoding
* Explicit-backoff in LexTree linguist
* Significant LVCSR speedup with proper LexTree compression
* Simple filter to drop zero energy frames
* Graphviz for grammar dump visualization instead of AISee
* Voxforge decoding accuracy test
* Lattice scoring speedup
* JSAPI-free JSGF parser
Bug Fixes:
* Insertion probabilities are counted in lattice scores
* Don't waste resources and memory on dummy acoustic model
transformations
* Small DMP files are loaded properly
* JSGF parser fixes
* Documentation improvements
* Debian package stuff
Thanks:
Antoine Raux, Marek Lesiak, Yaniv Kunda, Brian Romanowski, Tony
Robinson, Bhiksha Raj, Timo Baumann, Michele Alessandrini, Francisco
Aguilera, Peter Wolf, David Huggins-Daines, Dirk Schnelle-Walka.
-------------------------------------------------------------------
Version: 1.0Beta4
Release Date: February 2010
-------------------------------------------------------------------
New Features and Improvements:
* Large arbitrary-order language models
* Simplified and reworked model loading code
* Raw configuration and demos
* HTK model loader
* A lot of code optimizations
* JSAPI-independent JSGF parser
* Noise filtering components
* Lattice rescoring
* Server-based language model
Bug fixes:
* Lots of bug fixes: PLP extraction, race-conditions
in scoring, etc.
Thanks:
Peter Wolf, Yaniv Kunda, Antoine Raux, Dirk Schnelle-Walka,
Yannick Estève, Anthony Rousseau and LIUM team, Christophe Cerisara.
-------------------------------------------------------------------
Version: 1.0Beta3
Release Date: August 2009
-------------------------------------------------------------------
New Features and Improvements:
* BatchAGC frontend component
* Completed transition to defaults in annotations
* ConcatFeatureExtrator to cooperate with cepwin models
* End of stream signals are passed to the decoder to fix cancellation
* Timer API improvement
* Threading policy is changed to TAS
Bug fixes:
* Fixes reading UTF-8 from language model dump.
* Huge memory optimization of the lattice compression
* More stable frontend work with DataStart and DataEnd and optional
SpeechStart/SpeechEnd
Thanks:
Yaniv Kunda, Michele Alessandrini, Holger Brandl, Timo Baumann,
Evandro Gouvea
-------------------------------------------------------------------
Version: 1.0Beta2
Release Date: February 2009
-------------------------------------------------------------------
New Features and Improvements:
* new much cleaner and more robust configuration system
* migrated to java5
* xml-free instantiation of new systems
* improved feature extraction (better voice activity detection, many bugfixes)
* Cleaned up some of the core APIs
* include-tag for configuration files
* better JavaSound support
* fully qualified grammar names in JSGF (Roger Toenz)
* support for dictionary addenda in the FastDictionary (Gregg Liming)
* added batch tools for measuring performance on NIST corpus with CTL files
* many performance and stability improvements
-------------------------------------------------------------------
Version: 1.0Beta
Release Date: September 2004
-------------------------------------------------------------------
New Features:
* Confidence scoring
* Posterior probability computation
* Sausage creation from a lattice
* Dynamic grammars
* Narrow bandwidth acoustic model
* Out-of-grammar utterance rejection
* More demonstration programs
* WSJ5K Language model
Improvements:
* Better control over microphone selection
* JSGF limitations removed
* Improved performance for large, high-perplexity JSGF grammars
* Added Filler support for JSGF Grammars
* Ability to configure microphone input
* Added ECMAScript Action Tags support and demos.
Bug fixes:
* Lots of bug fixes
Documentation:
* Added the Sphinx-4 FAQ
* Added scripts and instructions for building a WSJ5k language model
from LDC data.
Thanks:
* Peter Gorniak, Willie Walker, Philip Kwok, Paul Lamere
-------------------------------------------------------------------
Version: 0.1alpha
Release Date: June 2004
-------------------------------------------------------------------
Initial release


@ -1,88 +0,0 @@
Speaker Adaptation with MLLR Transformation
Unsupervised speaker adaptation for Sphinx4
There are two methods for building an improved acoustic model. The first is
to collect data from a speaker and train the acoustic model set on it;
because the model then captures the speaker's characteristics, recognition
becomes more accurate. The disadvantage of this method is that it requires a
large amount of collected data to reach sufficient model accuracy.
The second method, used when only a small amount of data is available from a
new speaker, is to collect what there is and apply an adaptation technique
that fits the model set to the speaker's characteristics.
The adaptation technique used is the MLLR (maximum likelihood linear
regression) transform. Depending on the available data, it generates one or
more transformations that reduce the mismatch between an initial model set
and the adaptation data. When the amount of available data is very small
there is only one transformation, called the global adaptation transform,
which is applied to every Gaussian component in the model set. When the
amount of adaptation data is larger, the number of transformations grows and
each transformation is applied to a certain cluster of Gaussian components.
To be able to decode with an adapted model there are two important classes that
should be imported:
import edu.cmu.sphinx.decoder.adaptation.Stats;
import edu.cmu.sphinx.decoder.adaptation.Transform;
The Stats class estimates an MLLR transform for each cluster of data, and
each transform is applied to its corresponding cluster. You choose the number
of clusters by passing it as the argument to createStats(nrOfClusters). The
method returns an object that contains the loaded acoustic model and the
number of clusters. It is important to collect counts from each Result
object, because the estimation of the MLLR transformation is based on them.
Before you start collecting counts, all Gaussians must be clustered.
So, createStats(nrOfClusters) will generate a ClusteredDensityFileData object
to prepare the Gaussians. The ClusteredDensityFileData class performs the
clustering using the k-means algorithm, which aims to partition the Gaussians
into k clusters in which each Gaussian belongs to the cluster with the
nearest mean. Clustering is computationally difficult in general, so the
heuristic used here is the Euclidean criterion.
The next step is to collect counts from each Result object and store them
separately for each cluster. Here, the matrices regLs and regRs used in
computing the transformation are filled. The Transform class performs the
actual transformation for each cluster. Given the counts previously gathered
and the number of clusters, the class computes the transformation matrix A
and the bias vector B, which are tied across the Gaussians of the
corresponding cluster. A Transform object contains all the transformations
computed for an utterance. To use the adapted acoustic model it is necessary
to update the Sphinx3Loader, which is responsible for loading the files of
the model. By the time the update occurs the acoustic model is already
loaded, so the setTransform(transform) method replaces the old means with
the new ones.
Now that we have covered the theory, let's see the practice. Here is how you
create and use an MLLR transformation:
Stats stats = recognizer.createStats(1);
recognizer.startRecognition(stream);
while ((result = recognizer.getResult()) != null) {
stats.collect(result);
}
recognizer.stopRecognition();
// Transform represents the speech profile
Transform transform = stats.createTransform();
recognizer.setTransform(transform);
After setting the transformation on the StreamSpeechRecognizer object,
the recognizer is ready to decode using the new means. The recognition
process is the same as when decoding with the general acoustic model.
Creating and setting a transformation is like creating a new acoustic model
with the speaker's characteristics, so the accuracy will be better.
For later decodings you can store a speaker's transformation in a file by
calling store("FilePath", 0) on the Transform object.
If you have your own transformation (known as an mllr_matrix) previously
generated with Sphinx4 or with another program, you can load the file by
calling load("FilePath") on a Transform object and then set it on a
Recognizer object, as in the sketch below.
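A sketch in the style of the snippet above; the file name is arbitrary, and how an empty Transform object is obtained for loading is not covered by this document (check the Transform class for its constructor):

// first session: adapt, save the speaker profile, and decode with it
Transform transform = stats.createTransform();
transform.store("speaker1.mllr", 0);
recognizer.setTransform(transform);

// a later session: reload the saved mllr_matrix instead of re-adapting
transform.load("speaker1.mllr");
recognizer.setTransform(transform);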


@ -1,40 +0,0 @@
Copyright 1999-2015 Carnegie Mellon University.
Portions Copyright 2002-2008 Sun Microsystems, Inc.
Portions Copyright 2002-2008 Mitsubishi Electric Research Laboratories.
Portions Copyright 2013-2015 Alpha Cephei, Inc.
All Rights Reserved. Use is subject to license terms.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
3. Original authors' names are not deleted.
4. The authors' names are not used to endorse or promote products
derived from this software without specific prior written
permission.
This work was supported in part by funding from the Defense Advanced
Research Projects Agency and the National Science Foundation of the
United States of America, the CMU Sphinx Speech Consortium, and
Sun Microsystems, Inc.
CARNEGIE MELLON UNIVERSITY, SUN MICROSYSTEMS, INC., MITSUBISHI
ELECTRONIC RESEARCH LABORATORIES AND THE CONTRIBUTORS TO THIS WORK
DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
CARNEGIE MELLON UNIVERSITY, SUN MICROSYSTEMS, INC., MITSUBISHI
ELECTRONIC RESEARCH LABORATORIES NOR THE CONTRIBUTORS BE LIABLE FOR
ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.


@ -1,88 +0,0 @@
<project
xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.sonatype.oss</groupId>
<artifactId>oss-parent</artifactId>
<version>7</version>
</parent>
<groupId>edu.cmu.sphinx</groupId>
<artifactId>sphinx4-parent</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>pom</packaging>
<name>Sphinx4</name>
<url>http://cmusphinx.sourceforge.net</url>
<modules>
<module>sphinx4-core</module>
<module>sphinx4-data</module>
<module>sphinx4-samples</module>
</modules>
<dependencies>
<dependency>
<groupId>org.testng</groupId>
<artifactId>testng</artifactId>
<version>6.8.8</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.hamcrest</groupId>
<artifactId>hamcrest-library</artifactId>
<version>1.3</version>
<scope>test</scope>
</dependency>
</dependencies>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.scm.root>svn.code.sf.net/p/cmusphinx/code/trunk/sphinx4</project.scm.root>
</properties>
<scm>
<connection>scm:svn:http://${project.scm.root}</connection>
<developerConnection>scm:svn:svn+ssh://${project.scm.root}</developerConnection>
<url>http://${project.scm.root}</url>
</scm>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-source-plugin</artifactId>
<version>2.2.1</version>
<executions>
<execution>
<id>attach-sources</id>
<phase>package</phase>
<goals>
<goal>jar</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-javadoc-plugin</artifactId>
<version>2.9.1</version>
<executions>
<execution>
<id>attach-javadocs</id>
<phase>package</phase>
<goals>
<goal>jar</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>


@ -1,34 +0,0 @@
<project
xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>edu.cmu.sphinx</groupId>
<artifactId>sphinx4-parent</artifactId>
<version>1.0-SNAPSHOT</version>
</parent>
<artifactId>sphinx4-core</artifactId>
<packaging>jar</packaging>
<name>Sphinx4 core</name>
<dependencies>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-math3</artifactId>
<version>3.2</version>
</dependency>
<dependency>
<groupId>edu.cmu.sphinx</groupId>
<artifactId>sphinx4-data</artifactId>
<version>1.0-SNAPSHOT</version>
<scope>test</scope>
</dependency>
</dependencies>
</project>


@ -1,355 +0,0 @@
/*
* Copyright 2014 Alpha Cephei Inc.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.alignment;
import static java.lang.Math.abs;
import static java.lang.Math.max;
import static java.lang.Math.min;
import static java.util.Arrays.fill;
import static java.util.Collections.emptyList;
import java.util.*;
import edu.cmu.sphinx.util.Range;
import edu.cmu.sphinx.util.Utilities;
/**
*
* @author Alexander Solovets
*/
public class LongTextAligner {
private final class Alignment {
public final class Node {
private final int databaseIndex;
private final int queryIndex;
private Node(int row, int column) {
this.databaseIndex = column;
this.queryIndex = row;
}
public int getDatabaseIndex() {
return shifts.get(databaseIndex - 1);
}
public int getQueryIndex() {
return indices.get(queryIndex - 1);
}
public String getQueryWord() {
if (queryIndex > 0)
return query.get(getQueryIndex());
return null;
}
public String getDatabaseWord() {
if (databaseIndex > 0)
return reftup.get(getDatabaseIndex());
return null;
}
public int getValue() {
if (isBoundary())
return max(queryIndex, databaseIndex);
return hasMatch() ? 0 : 1;
}
public boolean hasMatch() {
return getQueryWord().equals(getDatabaseWord());
}
public boolean isBoundary() {
return queryIndex == 0 || databaseIndex == 0;
}
public boolean isTarget() {
return queryIndex == indices.size() &&
databaseIndex == shifts.size();
}
public List<Node> adjacent() {
List<Node> result = new ArrayList<Node>(3);
if (queryIndex < indices.size() &&
databaseIndex < shifts.size()) {
result.add(new Node(queryIndex + 1, databaseIndex + 1));
}
if (databaseIndex < shifts.size()) {
result.add(new Node(queryIndex, databaseIndex + 1));
}
if (queryIndex < indices.size()) {
result.add(new Node(queryIndex + 1, databaseIndex));
}
return result;
}
@Override
public boolean equals(Object object) {
if (!(object instanceof Node))
return false;
Node other = (Node) object;
return queryIndex == other.queryIndex &&
databaseIndex == other.databaseIndex;
}
@Override
public int hashCode() {
return 31 * (31 * queryIndex + databaseIndex);
}
@Override
public String toString() {
return String.format("[%d %d]", queryIndex, databaseIndex);
}
}
private final List<Integer> shifts;
private final List<String> query;
private final List<Integer> indices;
private final List<Node> alignment;
public Alignment(List<String> query, Range range) {
this.query = query;
indices = new ArrayList<Integer>();
Set<Integer> shiftSet = new TreeSet<Integer>();
for (int i = 0; i < query.size(); i++) {
if (tupleIndex.containsKey(query.get(i))) {
indices.add(i);
for (Integer shift : tupleIndex.get(query.get(i))) {
if (range.contains(shift))
shiftSet.add(shift);
}
}
}
shifts = new ArrayList<Integer>(shiftSet);
final Map<Node, Integer> cost = new HashMap<Node, Integer>();
PriorityQueue<Node> openSet = new PriorityQueue<Node>(1, new Comparator<Node>() {
@Override
public int compare(Node o1, Node o2) {
return cost.get(o1).compareTo(cost.get(o2));
}
});
Collection<Node> closedSet = new HashSet<Node>();
Map<Node, Node> parents = new HashMap<Node, Node>();
Node startNode = new Node(0, 0);
cost.put(startNode, 0);
openSet.add(startNode);
while (!openSet.isEmpty()) {
Node q = openSet.poll();
if (closedSet.contains(q))
continue;
if (q.isTarget()) {
List<Node> backtrace = new ArrayList<Node>();
while (parents.containsKey(q)) {
if (!q.isBoundary() && q.hasMatch())
backtrace.add(q);
q = parents.get(q);
}
alignment = new ArrayList<Node>(backtrace);
Collections.reverse(alignment);
return;
}
closedSet.add(q);
for (Node nb : q.adjacent()) {
if (closedSet.contains(nb))
continue;
// FIXME: move to appropriate location
int l = abs(indices.size() - shifts.size() - q.queryIndex +
q.databaseIndex) -
abs(indices.size() - shifts.size() -
nb.queryIndex +
nb.databaseIndex);
Integer oldScore = cost.get(nb);
Integer qScore = cost.get(q);
if (oldScore == null)
oldScore = Integer.MAX_VALUE;
if (qScore == null)
qScore = Integer.MAX_VALUE;
int newScore = qScore + nb.getValue() - l;
if (newScore < oldScore) {
cost.put(nb, newScore);
openSet.add(nb);
parents.put(nb, q);
}
}
}
alignment = emptyList();
}
public List<Node> getIndices() {
return alignment;
}
}
private final int tupleSize;
private final List<String> reftup;
private final HashMap<String, ArrayList<Integer>> tupleIndex;
private List<String> refWords;
/**
* Constructs a new text aligner that serves requests for alignment of a
* sequence of words with the provided database sequence. Sequences are
* aligned by tuples comprising one or more subsequent words.
*
* @param words list of words forming the database
* @param tupleSize size of a tuple, must be greater than or equal to 1
*/
public LongTextAligner(List<String> words, int tupleSize) {
assert words != null;
assert tupleSize > 0;
this.tupleSize = tupleSize;
this.refWords = words;
int offset = 0;
reftup = getTuples(words);
tupleIndex = new HashMap<String, ArrayList<Integer>>();
for (String tuple : reftup) {
ArrayList<Integer> indexes = tupleIndex.get(tuple);
if (indexes == null) {
indexes = new ArrayList<Integer>();
tupleIndex.put(tuple, indexes);
}
indexes.add(offset++);
}
}
/**
* Aligns query sequence with the previously built database.
* @param query list of words to look for
*
* @return indices of alignment
*/
public int[] align(List<String> query) {
return align(query, new Range(0, refWords.size()));
}
/**
* Aligns query sequence with the previously built database.
* @param words list of words to look for
* @param range range of database to look for alignment
*
* @return indices of alignment
*/
public int[] align(List<String> words, Range range) {
if (range.upperEndpoint() - range.lowerEndpoint() < tupleSize || words.size() < tupleSize) {
return alignTextSimple(refWords.subList(range.lowerEndpoint(), range.upperEndpoint()), words, range.lowerEndpoint());
}
int[] result = new int[words.size()];
fill(result, -1);
int lastIndex = 0;
for (Alignment.Node node : new Alignment(getTuples(words), range)
.getIndices()) {
// for (int j = 0; j < tupleSize; ++j)
lastIndex = max(lastIndex, node.getQueryIndex());
for (; lastIndex < node.getQueryIndex() + tupleSize; ++lastIndex)
result[lastIndex] = node.getDatabaseIndex() + lastIndex -
node.getQueryIndex();
}
return result;
}
/**
* Makes list of tuples of the given size out of list of words.
*
* @param words words
* @return list of tuples of size {@link #tupleSize}
*/
private List<String> getTuples(List<String> words) {
List<String> result = new ArrayList<String>();
LinkedList<String> tuple = new LinkedList<String>();
Iterator<String> it = words.iterator();
for (int i = 0; i < tupleSize - 1; i++) {
tuple.add(it.next());
}
while (it.hasNext()) {
tuple.addLast(it.next());
result.add(Utilities.join(tuple));
tuple.removeFirst();
}
return result;
}
static int[] alignTextSimple(List<String> database, List<String> query,
int offset) {
int n = database.size() + 1;
int m = query.size() + 1;
int[][] f = new int[n][m];
f[0][0] = 0;
for (int i = 1; i < n; ++i) {
f[i][0] = i;
}
for (int j = 1; j < m; ++j) {
f[0][j] = j;
}
for (int i = 1; i < n; ++i) {
for (int j = 1; j < m; ++j) {
int match = f[i - 1][j - 1];
String refWord = database.get(i - 1);
String queryWord = query.get(j - 1);
if (!refWord.equals(queryWord)) {
++match;
}
int insert = f[i][j - 1] + 1;
int delete = f[i - 1][j] + 1;
f[i][j] = min(match, min(insert, delete));
}
}
--n;
--m;
int[] alignment = new int[m];
Arrays.fill(alignment, -1);
while (m > 0) {
if (n == 0) {
--m;
} else {
String refWord = database.get(n - 1);
String queryWord = query.get(m - 1);
if (f[n - 1][m - 1] <= f[n - 1][m]
&& f[n - 1][m - 1] <= f[n][m - 1]
&& refWord.equals(queryWord)) {
alignment[--m] = --n + offset;
} else {
if (f[n - 1][m] < f[n][m - 1]) {
--n;
} else {
--m;
}
}
}
}
return alignment;
}
}
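A compact sketch, not part of the original file: align a short query against a reference word list using bigram tuples.
import java.util.Arrays;
import java.util.List;

public class AlignerDemo {
    public static void main(String[] args) {
        List<String> reference = Arrays.asList(
                "the", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog");
        LongTextAligner aligner = new LongTextAligner(reference, 2);
        int[] indices = aligner.align(Arrays.asList("brown", "fox", "jumps"));
        System.out.println(Arrays.toString(indices)); // positions in the reference; -1 = unaligned
    }
}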


@ -1,36 +0,0 @@
/*
* Copyright 2014 Alpha Cephei Inc.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.alignment;
import java.util.Arrays;
import java.util.List;
public class SimpleTokenizer implements TextTokenizer {
public List<String> expand(String text) {
// note: three character literals here were unreadable in the source;
// the right single quote, left single quote and en dash are assumed.
text = text.replace('’', '\'');
text = text.replace('‘', ' ');
text = text.replace('”', ' ');
text = text.replace('“', ' ');
text = text.replace('"', ' ');
text = text.replace('»', ' ');
text = text.replace('«', ' ');
text = text.replace('–', '-');
text = text.replace('—', ' ');
text = text.replace('…', ' ');
text = text.replace(" - ", " ");
text = text.replaceAll("[/_*%]", " ");
text = text.toLowerCase();
String[] tokens = text.split("[.,?:!;()]");
return Arrays.asList(tokens);
}
}
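A one-method sketch, not part of the original file: sentence punctuation splits the normalized text into lowercase chunks.
import java.util.List;

public class TokenizerDemo {
    public static void main(String[] args) {
        TextTokenizer tokenizer = new SimpleTokenizer();
        List<String> lines = tokenizer.expand("Hello, world! How are you?");
        System.out.println(lines); // [hello,  world,  how are you]
    }
}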


@ -1,25 +0,0 @@
/*
* Copyright 2014 Alpha Cephei Inc.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.alignment;
import java.util.List;
public interface TextTokenizer {
/**
* Cleans the text and returns the list of lines
*
* @param text Input text
* @return a list of lines in the text.
*/
List<String> expand(String text);
}


@ -1,158 +0,0 @@
/**
* Portions Copyright 2001 Sun Microsystems, Inc.
* Portions Copyright 1999-2001 Language Technologies Institute,
* Carnegie Mellon University.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.alignment;
/**
* Contains a parsed token from a Tokenizer.
*/
public class Token {
private String token = null;
private String whitespace = null;
private String prepunctuation = null;
private String postpunctuation = null;
private int position = 0; // position in the original input text
private int lineNumber = 0;
/**
* Returns the whitespace characters of this Token.
*
* @return the whitespace characters of this Token; null if this Token does
* not use whitespace characters
*/
public String getWhitespace() {
return whitespace;
}
/**
* Returns the prepunctuation characters of this Token.
*
* @return the prepunctuation characters of this Token; null if this Token
* does not use prepunctuation characters
*/
public String getPrepunctuation() {
return prepunctuation;
}
/**
* Returns the postpunctuation characters of this Token.
*
* @return the postpunctuation characters of this Token; null if this Token
* does not use postpunctuation characters
*/
public String getPostpunctuation() {
return postpunctuation;
}
/**
* Returns the position of this token in the original input text.
*
* @return the position of this token in the original input text
*/
public int getPosition() {
return position;
}
/**
* Returns the line of this token in the original text.
*
* @return the line of this token in the original text
*/
public int getLineNumber() {
return lineNumber;
}
/**
* Sets the whitespace characters of this Token.
*
* @param whitespace the whitespace character for this token
*/
public void setWhitespace(String whitespace) {
this.whitespace = whitespace;
}
/**
* Sets the prepunctuation characters of this Token.
*
* @param prepunctuation the prepunctuation characters
*/
public void setPrepunctuation(String prepunctuation) {
this.prepunctuation = prepunctuation;
}
/**
* Sets the postpunctuation characters of this Token.
*
* @param postpunctuation the postpunctuation characters
*/
public void setPostpunctuation(String postpunctuation) {
this.postpunctuation = postpunctuation;
}
/**
* Sets the position of the token in the original input text.
*
* @param position the position of the input text
*/
public void setPosition(int position) {
this.position = position;
}
/**
* Set the line of this token in the original text.
*
* @param lineNumber the line of this token in the original text
*/
public void setLineNumber(int lineNumber) {
this.lineNumber = lineNumber;
}
/**
* Returns the string associated with this token.
*
* @return the token if it exists; otherwise null
*/
public String getWord() {
return token;
}
/**
* Sets the string of this Token.
*
* @param word the word for this token
*/
public void setWord(String word) {
token = word;
}
/**
* Converts this token to a string.
*
* @return the string representation of this object
*/
public String toString() {
StringBuffer fullToken = new StringBuffer();
if (whitespace != null) {
fullToken.append(whitespace);
}
if (prepunctuation != null) {
fullToken.append(prepunctuation);
}
if (token != null) {
fullToken.append(token);
}
if (postpunctuation != null) {
fullToken.append(postpunctuation);
}
return fullToken.toString();
}
}


@ -1,405 +0,0 @@
/**
* Portions Copyright 2001 Sun Microsystems, Inc.
* Portions Copyright 1999-2001 Language Technologies Institute,
* Carnegie Mellon University.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.alignment.tokenizer;
import java.io.IOException;
import java.io.Reader;
import java.util.Iterator;
import edu.cmu.sphinx.alignment.Token;
/**
* Implements the tokenizer interface. Breaks an input sequence of characters
* into a set of tokens.
*/
public class CharTokenizer implements Iterator<Token> {
/** A constant indicating that the end of the stream has been read. */
public static final int EOF = -1;
/** A string containing the default whitespace characters. */
public static final String DEFAULT_WHITESPACE_SYMBOLS = " \t\n\r";
/** A string containing the default single characters. */
public static final String DEFAULT_SINGLE_CHAR_SYMBOLS = "(){}[]";
/** A string containing the default pre-punctuation characters. */
public static final String DEFAULT_PREPUNCTUATION_SYMBOLS = "\"'`({[";
/** A string containing the default post-punctuation characters. */
public static final String DEFAULT_POSTPUNCTUATION_SYMBOLS =
"\"'`.,:;!?(){}[]";
/** The line number. */
private int lineNumber;
/** The input text (from the Utterance) to tokenize. */
private String inputText;
/** The file to read input text from, if using file mode. */
private Reader reader;
/** The current character, whether it's from the file or the input text. */
private int currentChar;
/**
* The current char position for the input text (not the file) this is
* called "file_pos" in flite
*/
private int currentPosition;
/** The delimiting symbols of this tokenizer. */
private String whitespaceSymbols = DEFAULT_WHITESPACE_SYMBOLS;
private String singleCharSymbols = DEFAULT_SINGLE_CHAR_SYMBOLS;
private String prepunctuationSymbols = DEFAULT_PREPUNCTUATION_SYMBOLS;
private String postpunctuationSymbols = DEFAULT_POSTPUNCTUATION_SYMBOLS;
/** The error description. */
private String errorDescription;
/** A place to store the current token. */
private Token token;
private Token lastToken;
/**
* Constructs a Tokenizer.
*/
public CharTokenizer() {}
/**
* Creates a tokenizer that will return tokens from the given string.
*
* @param string the string to tokenize
*/
public CharTokenizer(String string) {
setInputText(string);
}
/**
* Creates a tokenizer that will return tokens from the given file.
*
* @param file where to read the input from
*/
public CharTokenizer(Reader file) {
setInputReader(file);
}
/**
* Sets the whitespace symbols of this Tokenizer to the given symbols.
*
* @param symbols the whitespace symbols
*/
public void setWhitespaceSymbols(String symbols) {
whitespaceSymbols = symbols;
}
/**
* Sets the single character symbols of this Tokenizer to the given
* symbols.
*
* @param symbols the single character symbols
*/
public void setSingleCharSymbols(String symbols) {
singleCharSymbols = symbols;
}
/**
* Sets the prepunctuation symbols of this Tokenizer to the given symbols.
*
* @param symbols the prepunctuation symbols
*/
public void setPrepunctuationSymbols(String symbols) {
prepunctuationSymbols = symbols;
}
/**
* Sets the postpunctuation symbols of this Tokenizer to the given symbols.
*
* @param symbols the postpunctuation symbols
*/
public void setPostpunctuationSymbols(String symbols) {
postpunctuationSymbols = symbols;
}
/**
* Sets the text to tokenize.
*
* @param inputString the string to tokenize
*/
public void setInputText(String inputString) {
inputText = inputString;
currentPosition = 0;
if (inputText != null) {
getNextChar();
}
}
/**
* Sets the input reader
*
* @param reader the input source
*/
public void setInputReader(Reader reader) {
this.reader = reader;
getNextChar();
}
/**
* Returns the next token.
*
* @return the next token if it exists, <code>null</code> if no more tokens
*/
public Token next() {
lastToken = token;
token = new Token();
// Skip whitespace
token.setWhitespace(getTokenOfCharClass(whitespaceSymbols));
// quoted strings currently ignored
// get prepunctuation
token.setPrepunctuation(getTokenOfCharClass(prepunctuationSymbols));
// get the symbol itself
if (singleCharSymbols.indexOf(currentChar) != -1) {
token.setWord(String.valueOf((char) currentChar));
getNextChar();
} else {
token.setWord(getTokenNotOfCharClass(whitespaceSymbols));
}
token.setPosition(currentPosition);
token.setLineNumber(lineNumber);
// This'll have token *plus* postpunctuation
// Get postpunctuation
removeTokenPostpunctuation();
return token;
}
/**
* Returns <code>true</code> if there are more tokens, <code>false</code>
* otherwise.
*
* @return <code>true</code> if there are more tokens, <code>false</code>
* otherwise
*/
public boolean hasNext() {
return (currentChar != EOF);
}
public void remove() {
throw new UnsupportedOperationException();
}
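/*
* A minimal usage sketch (illustrative only; the Token accessors shown
* here are the ones used elsewhere in this file):
*
*   CharTokenizer t = new CharTokenizer("Hello there, world!");
*   while (t.hasNext()) {
*       Token tok = t.next();
*       System.out.println(tok.getWord());
*   }
*/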
/**
* Advances the currentPosition pointer by 1 (if it does not exceed the
* length of inputText) and returns the character at that position.
*
* @return the next character, or EOF if no more characters exist
*/
private int getNextChar() {
if (reader != null) {
try {
int readVal = reader.read();
if (readVal == -1) {
currentChar = EOF;
} else {
currentChar = (char) readVal;
}
} catch (IOException ioe) {
currentChar = EOF;
errorDescription = ioe.getMessage();
}
} else if (inputText != null) {
if (currentPosition < inputText.length()) {
currentChar = (int) inputText.charAt(currentPosition);
} else {
currentChar = EOF;
}
}
if (currentChar != EOF) {
currentPosition++;
}
if (currentChar == '\n') {
lineNumber++;
}
return currentChar;
}
/**
* Starting from the current position of the input text, returns the
* subsequent characters of type charClass, and not of type
* singleCharSymbols.
*
* @param charClass the type of characters to look for
*
* @return a string of characters starting from the current position of the
* input text, until it encounters a character not in the string
* charClass
*
*/
private String getTokenOfCharClass(String charClass) {
return getTokenByCharClass(charClass, true);
}
/**
* Starting from the current position of the input text/file, returns the
* subsequent characters, not of type singleCharSymbols, and ended at
* characters of type endingCharClass. E.g., if the current string is
* "xxxxyyy", endingCharClass is "yz", and singleCharSymbols is "abc", then
* this method will return "xxxx".
*
* @param endingCharClass the type of characters to look for
*
* @return a string of characters from the current position until it
* encounters characters in endingCharClass
*
*/
private String getTokenNotOfCharClass(String endingCharClass) {
return getTokenByCharClass(endingCharClass, false);
}
/**
* Common implementation behind getTokenOfCharClass() and
* getTokenNotOfCharClass(). If containThisCharClass is
* <code>true</code>, a string from the current position through the last
* consecutive character in charClass is returned. If containThisCharClass
* is <code>false</code>, a string ending before the first occurrence of a
* character in charClass is returned.
*
* @param charClass the string of characters you want included or excluded
* in your return
* @param containThisCharClass determines if you want characters in
* charClass in the returned string or not
*
* @return a string of characters from the current position up to the
* boundary determined by charClass and containThisCharClass
*/
private String getTokenByCharClass(String charClass,
boolean containThisCharClass) {
final StringBuilder buffer = new StringBuilder();
// if we want the returned string to contain chars in charClass, then
// containThisCharClass is TRUE and
// ((charClass.indexOf(currentChar) != -1) == containThisCharClass)
// returns true; if we want it to stop at characters of charClass,
// then containThisCharClass is FALSE, and the condition returns
// false.
while ((charClass.indexOf(currentChar) != -1) == containThisCharClass
&& singleCharSymbols.indexOf(currentChar) == -1
&& currentChar != EOF) {
buffer.append((char) currentChar);
getNextChar();
}
return buffer.toString();
}
/**
* Removes the postpunctuation characters from the current token. Copies
* those postpunctuation characters to the class variable
* 'postpunctuation'.
*/
private void removeTokenPostpunctuation() {
if (token == null) {
return;
}
final String tokenWord = token.getWord();
int tokenLength = tokenWord.length();
int position = tokenLength - 1;
while (position > 0
&& postpunctuationSymbols.indexOf((int) tokenWord
.charAt(position)) != -1) {
position--;
}
if (tokenLength - 1 != position) {
// Copy postpunctuation from token
token.setPostpunctuation(tokenWord.substring(position + 1));
// truncate token at postpunctuation
token.setWord(tokenWord.substring(0, position + 1));
} else {
token.setPostpunctuation("");
}
}
/**
* Returns <code>true</code> if there were errors while reading tokens
*
* @return <code>true</code> if there were errors; <code>false</code>
* otherwise
*/
public boolean hasErrors() {
return errorDescription != null;
}
/**
* if hasErrors returns <code>true</code>, this will return a description
* of the error encountered, otherwise it will return <code>null</code>
*
* @return a description of the last error that occurred.
*/
public String getErrorDescription() {
return errorDescription;
}
/**
* Determines if the current token should start a new sentence.
*
* @return <code>true</code> if a new sentence should be started
*/
public boolean isSentenceSeparator() {
if (lastToken == null || token == null) {
return false;
}
String tokenWhiteSpace = token.getWhitespace();
String lastTokenPostpunctuation = lastToken.getPostpunctuation();
if (tokenWhiteSpace.indexOf('\n') != tokenWhiteSpace
.lastIndexOf('\n')) {
return true;
} else if (lastTokenPostpunctuation.indexOf(':') != -1
|| lastTokenPostpunctuation.indexOf('?') != -1
|| lastTokenPostpunctuation.indexOf('!') != -1) {
return true;
} else if (lastTokenPostpunctuation.indexOf('.') != -1
&& tokenWhiteSpace.length() > 1
&& Character.isUpperCase(token.getWord().charAt(0))) {
return true;
} else {
String lastWord = lastToken.getWord();
int lastWordLength = lastWord.length();
if (lastTokenPostpunctuation.indexOf('.') != -1
&&
/* next word starts with a capital */
Character.isUpperCase(token.getWord().charAt(0))
&&
/* last word isn't an abbreviation */
!(Character.isUpperCase(lastWord
.charAt(lastWordLength - 1)) || (lastWordLength < 4 && Character
.isUpperCase(lastWord.charAt(0))))) {
return true;
}
}
return false;
}
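/*
* Illustration: when tokenizing "It was late. The dog barked.", the token
* "The" follows a token whose postpunctuation contains '.', starts with a
* capital letter, and the previous word "late" does not look like an
* abbreviation, so this method reports a sentence break.
*/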
}

View file

@ -1,608 +0,0 @@
/**
* Portions Copyright 2001 Sun Microsystems, Inc.
* Portions Copyright 1999-2001 Language Technologies Institute,
* Carnegie Mellon University.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.alignment.tokenizer;
import java.io.*;
import java.net.URL;
import java.util.StringTokenizer;
import java.util.logging.Logger;
import java.util.regex.Pattern;
/**
* Implementation of a Classification and Regression Tree (CART) that is used
* more like a binary decision tree, with each node containing a decision or a
* final value. The decision nodes in the CART trees operate on an Item and
* have the following format:
*
* <pre>
* NODE feat operand value qfalse
* </pre>
*
* <p>
* Where <code>feat</code> is a string that represents a feature to pass to
* the <code>findFeature</code> method of an item.
*
* <p>
* The <code>value</code> represents the value to be compared against the
* feature obtained from the item via the <code>feat</code> string. The
* <code>operand</code> is the operation to do the comparison. The available
* operands are as follows:
*
* <ul>
* <li>&lt; - the feature is less than the value
* <li>= - the feature is equal to the value
* <li>&gt; - the feature is greater than the value
* <li>MATCHES - the feature matches the regular expression stored in value
* <li>IN - [[[TODO: still guessing because none of the CART's in Flite seem to
* use IN]]] the value is in the list defined by the feature.
* </ul>
*
* <p>
* [[[TODO: provide support for the IN operator.]]]
*
* <p>
* For &lt; and &gt;, this CART coerces the value and feature to float's. For =,
* this CART coerces the value and feature to string and checks for string
* equality. For MATCHES, this CART uses the value as a regular expression and
* compares the obtained feature to that.
*
* <p>
* A CART is represented by an array in this implementation. The
* <code>qfalse</code> value represents the index of the array to go to if the
* comparison does not match. In this implementation, qtrue index is always
* implied, and represents the next element in the array. The root node of the
* CART is the first element in the array.
*
* <p>
* The interpretations always start at the root node of the CART and continue
* until a final node is found. The final nodes have the following form:
*
* <pre>
* LEAF value
* </pre>
*
* <p>
* Where <code>value</code> represents the value of the node. Reaching a final
* node indicates the interpretation is over and the value of the node is the
* interpretation result.
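*
* <p>
* As a minimal sketch (hypothetical data, not taken from any shipped
* voice), a complete CART in this format could read:
*
* <pre>
* TOTAL 3
* NODE name = String(hello) 2
* LEAF String(greeting)
* LEAF String(other)
* </pre>
*
* The root at index 0 goes to the implied index 1 on a match and to index
* 2 otherwise.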
*/
public class DecisionTree {
/** Logger instance. */
private static final Logger logger = Logger.getLogger(DecisionTree.class.getSimpleName());
/**
* Entry in file represents the total number of nodes in the file. This
* should be at the top of the file. The format should be "TOTAL n" where n
* is an integer value.
*/
final static String TOTAL = "TOTAL";
/**
* Entry in file represents a node. The format should be
* "NODE feat op val f" where 'feat' represents a feature, op represents an
* operand, val is the value, and f is the index of the node to go to if
* there isn't a match.
*/
final static String NODE = "NODE";
/**
* Entry in file represents a final node. The format should be "LEAF val"
* where val represents the value.
*/
final static String LEAF = "LEAF";
/**
* OPERAND_MATCHES
*/
final static String OPERAND_MATCHES = "MATCHES";
/**
* The CART. Entries can be DecisionNode or LeafNode. An ArrayList could be
* used here -- I chose not to because I thought it might be quicker to
* avoid dealing with the dynamic resizing.
*/
Node[] cart = null;
/**
* The number of nodes in the CART.
*/
transient int curNode = 0;
/**
* Creates a new CART by reading from the given URL.
*
* @param url the location of the CART data
*
* @throws IOException if errors occur while reading the data
*/
public DecisionTree(URL url) throws IOException {
BufferedReader reader;
String line;
reader = new BufferedReader(new InputStreamReader(url.openStream()));
line = reader.readLine();
while (line != null) {
if (!line.startsWith("***")) {
parseAndAdd(line);
}
line = reader.readLine();
}
reader.close();
}
/**
* Creates a new CART by reading from the given reader.
*
* @param reader the source of the CART data
* @param nodes the number of nodes to read for this cart
*
* @throws IOException if errors occur while reading the data
*/
public DecisionTree(BufferedReader reader, int nodes) throws IOException {
this(nodes);
String line;
for (int i = 0; i < nodes; i++) {
line = reader.readLine();
if (!line.startsWith("***")) {
parseAndAdd(line);
}
}
}
/**
* Creates a new CART that will be populated with nodes later.
*
* @param numNodes the number of nodes
*/
private DecisionTree(int numNodes) {
cart = new Node[numNodes];
}
/**
* Dump the CART tree as a dot file.
* <p>
* The dot tool is part of the graphviz distribution at <a
* href="http://www.graphviz.org/">http://www.graphviz.org/</a>. If
* installed, call it as "dot -O -Tpdf *.dot" from the console to generate
* pdfs.
* </p>
*
* @param out The PrintWriter to write to.
*/
public void dumpDot(PrintWriter out) {
out.write("digraph \"" + "CART Tree" + "\" {\n");
out.write("rankdir = LR\n");
for (Node n : cart) {
out.println("\tnode" + Math.abs(n.hashCode()) + " [ label=\""
+ n.toString() + "\", color=" + dumpDotNodeColor(n)
+ ", shape=" + dumpDotNodeShape(n) + " ]\n");
if (n instanceof DecisionNode) {
DecisionNode dn = (DecisionNode) n;
if (dn.qtrue < cart.length && cart[dn.qtrue] != null) {
out.write("\tnode" + Math.abs(n.hashCode()) + " -> node"
+ Math.abs(cart[dn.qtrue].hashCode())
+ " [ label=" + "TRUE" + " ]\n");
}
if (dn.qfalse < cart.length && cart[dn.qfalse] != null) {
out.write("\tnode" + Math.abs(n.hashCode()) + " -> node"
+ Math.abs(cart[dn.qfalse].hashCode())
+ " [ label=" + "FALSE" + " ]\n");
}
}
}
out.write("}\n");
out.close();
}
protected String dumpDotNodeColor(Node n) {
if (n instanceof LeafNode) {
return "green";
}
return "red";
}
protected String dumpDotNodeShape(Node n) {
return "box";
}
/**
* Creates a node from the given input line and adds it to the CART. It
* expects the TOTAL line to come before any of the nodes.
*
* @param line a line of input to parse
*/
protected void parseAndAdd(String line) {
StringTokenizer tokenizer = new StringTokenizer(line, " ");
String type = tokenizer.nextToken();
if (type.equals(LEAF) || type.equals(NODE)) {
cart[curNode] = getNode(type, tokenizer, curNode);
cart[curNode].setCreationLine(line);
curNode++;
} else if (type.equals(TOTAL)) {
cart = new Node[Integer.parseInt(tokenizer.nextToken())];
curNode = 0;
} else {
throw new Error("Invalid CART type: " + type);
}
}
/**
* Gets the node based upon the type and tokenizer.
*
* @param type <code>NODE</code> or <code>LEAF</code>
* @param tokenizer the StringTokenizer containing the data to get
* @param currentNode the index of the current node we're looking at
*
* @return the node
*/
protected Node getNode(String type, StringTokenizer tokenizer,
int currentNode) {
if (type.equals(NODE)) {
String feature = tokenizer.nextToken();
String operand = tokenizer.nextToken();
Object value = parseValue(tokenizer.nextToken());
int qfalse = Integer.parseInt(tokenizer.nextToken());
if (operand.equals(OPERAND_MATCHES)) {
return new MatchingNode(feature, value.toString(),
currentNode + 1, qfalse);
} else {
return new ComparisonNode(feature, value, operand,
currentNode + 1, qfalse);
}
} else if (type.equals(LEAF)) {
return new LeafNode(parseValue(tokenizer.nextToken()));
}
return null;
}
/**
* Coerces a string into a value.
*
* @param string of the form "type(value)"; for example, "Float(2.3)"
*
* @return the value
*/
protected Object parseValue(String string) {
int openParen = string.indexOf("(");
String type = string.substring(0, openParen);
String value = string.substring(openParen + 1, string.length() - 1);
if (type.equals("String")) {
return value;
} else if (type.equals("Float")) {
return new Float(Float.parseFloat(value));
} else if (type.equals("Integer")) {
return new Integer(Integer.parseInt(value));
} else if (type.equals("List")) {
StringTokenizer tok = new StringTokenizer(value, ",");
int size = tok.countTokens();
int[] values = new int[size];
for (int i = 0; i < size; i++) {
float fval = Float.parseFloat(tok.nextToken());
values[i] = Math.round(fval);
}
return values;
} else {
throw new Error("Unknown type: " + type);
}
}
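/*
* For instance, parseValue("Float(2.3)") yields a Float, while
* parseValue("List(1.2,3.7)") yields the int array {1, 4}, since each
* element is rounded.
*/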
/**
* Passes the given item through this CART and returns the interpretation.
*
* @param item the item to analyze
*
* @return the interpretation
*/
public Object interpret(Item item) {
int nodeIndex = 0;
DecisionNode decision;
while (!(cart[nodeIndex] instanceof LeafNode)) {
decision = (DecisionNode) cart[nodeIndex];
nodeIndex = decision.getNextNode(item);
}
logger.fine("LEAF " + cart[nodeIndex].getValue());
return ((LeafNode) cart[nodeIndex]).getValue();
}
/**
* A node for the CART.
*/
static abstract class Node {
/**
* The value of this node.
*/
protected Object value;
/**
* Create a new Node with the given value.
*/
public Node(Object value) {
this.value = value;
}
/**
* Get the value.
*/
public Object getValue() {
return value;
}
/**
* Return a string representation of the type of the value.
*/
public String getValueString() {
if (value == null) {
return "NULL()";
} else if (value instanceof String) {
return "String(" + value.toString() + ")";
} else if (value instanceof Float) {
return "Float(" + value.toString() + ")";
} else if (value instanceof Integer) {
return "Integer(" + value.toString() + ")";
} else {
return value.getClass().toString() + "(" + value.toString()
+ ")";
}
}
/**
* sets the line of text used to create this node.
*
* @param line the creation line
*/
public void setCreationLine(String line) {}
}
/**
* A decision node that determines the next Node to go to in the CART.
*/
abstract static class DecisionNode extends Node {
/**
* The feature used to find a value from an Item.
*/
private PathExtractor path;
/**
* Index of Node to go to if the comparison doesn't match.
*/
protected int qfalse;
/**
* Index of Node to go to if the comparison matches.
*/
protected int qtrue;
/**
* The feature used to find a value from an Item.
*/
public String getFeature() {
return path.toString();
}
/**
* Find the feature associated with this DecisionNode and the given
* item
*
* @param item the item to start from
* @return the object representing the feature
*/
public Object findFeature(Item item) {
return path.findFeature(item);
}
/**
* Returns the next node based upon the decision determined at this
* node
*
* @param item the current item.
* @return the index of the next node
*/
public final int getNextNode(Item item) {
return getNextNode(findFeature(item));
}
/**
* Create a new DecisionNode.
*
* @param feature the string used to get a value from an Item
* @param value the value to compare to
* @param qtrue the Node index to go to if the comparison matches
* @param qfalse the Node index to go to upon no match
*/
public DecisionNode(String feature, Object value, int qtrue, int qfalse) {
super(value);
this.path = new PathExtractor(feature, true);
this.qtrue = qtrue;
this.qfalse = qfalse;
}
/**
* Get the next Node to go to in the CART. The return value is an index
* in the CART.
*/
abstract public int getNextNode(Object val);
}
/**
* A decision Node that compares two values.
*/
static class ComparisonNode extends DecisionNode {
/**
* LESS_THAN
*/
final static String LESS_THAN = "<";
/**
* EQUALS
*/
final static String EQUALS = "=";
/**
* GREATER_THAN
*/
final static String GREATER_THAN = ">";
/**
* The comparison type. One of LESS_THAN, GREATER_THAN, or EQUALS.
*/
String comparisonType;
/**
* Create a new ComparisonNode with the given values.
*
* @param feature the string used to get a value from an Item
* @param value the value to compare to
* @param comparisonType one of LESS_THAN, EQUALS, or GREATER_THAN
* @param qtrue the Node index to go to if the comparison matches
* @param qfalse the Node index to go to upon no match
*/
public ComparisonNode(String feature, Object value,
String comparisonType, int qtrue, int qfalse) {
super(feature, value, qtrue, qfalse);
if (!comparisonType.equals(LESS_THAN)
&& !comparisonType.equals(EQUALS)
&& !comparisonType.equals(GREATER_THAN)) {
throw new Error("Invalid comparison type: " + comparisonType);
} else {
this.comparisonType = comparisonType;
}
}
/**
* Compare the given value and return the appropriate Node index.
* IMPLEMENTATION NOTE: for LESS_THAN and GREATER_THAN, the Node's value
* and the value passed in are converted to floating point values. For
* EQUALS, the Node's value and the value passed in are compared as
* Strings. This mirrors the behavior of Flite.
*
* @param val the value to compare
*/
public int getNextNode(Object val) {
boolean yes = false;
int ret;
if (comparisonType.equals(LESS_THAN)
|| comparisonType.equals(GREATER_THAN)) {
float cart_fval;
float fval;
if (value instanceof Float) {
cart_fval = ((Float) value).floatValue();
} else {
cart_fval = Float.parseFloat(value.toString());
}
if (val instanceof Float) {
fval = ((Float) val).floatValue();
} else {
fval = Float.parseFloat(val.toString());
}
if (comparisonType.equals(LESS_THAN)) {
yes = (fval < cart_fval);
} else {
yes = (fval > cart_fval);
}
} else { // comparisonType = "="
String sval = val.toString();
String cart_sval = value.toString();
yes = sval.equals(cart_sval);
}
if (yes) {
ret = qtrue;
} else {
ret = qfalse;
}
logger.fine(trace(val, yes, ret));
return ret;
}
private String trace(Object value, boolean match, int next) {
return "NODE " + getFeature() + " [" + value + "] "
+ comparisonType + " [" + getValue() + "] "
+ (match ? "Yes" : "No") + " next " + next;
}
/**
* Get a string representation of this Node.
*/
public String toString() {
return "NODE " + getFeature() + " " + comparisonType + " "
+ getValueString() + " " + Integer.toString(qtrue) + " "
+ Integer.toString(qfalse);
}
}
/**
* A Node that checks for a regular expression match.
*/
static class MatchingNode extends DecisionNode {
Pattern pattern;
/**
* Create a new MatchingNode with the given values.
*
* @param feature the string used to get a value from an Item
* @param regex the regular expression
* @param qtrue the Node index to go to if the comparison matches
* @param qfalse the Node index to go to upon no match
*/
public MatchingNode(String feature, String regex, int qtrue, int qfalse) {
super(feature, regex, qtrue, qfalse);
this.pattern = Pattern.compile(regex);
}
/**
* Compare the given value and return the appropriate CART index.
*
* @param val the value to compare -- this must be a String
*/
public int getNextNode(Object val) {
return pattern.matcher((String) val).matches() ? qtrue : qfalse;
}
/**
* Get a string representation of this Node.
*/
public String toString() {
StringBuilder buf =
new StringBuilder(NODE + " " + getFeature() + " "
+ OPERAND_MATCHES);
buf.append(getValueString() + " ");
buf.append(Integer.toString(qtrue) + " ");
buf.append(Integer.toString(qfalse));
return buf.toString();
}
}
/**
* The final Node of a CART. This is just a marker class.
*/
static class LeafNode extends Node {
/**
* Create a new LeafNode with the given value.
*
* @param value the value of this LeafNode
*/
public LeafNode(Object value) {
super(value);
}
/**
* Get a string representation of this Node.
*/
public String toString() {
return "LEAF " + getValueString();
}
}
}

View file

@ -1,145 +0,0 @@
/**
* Portions Copyright 2001 Sun Microsystems, Inc.
* Portions Copyright 1999-2001 Language Technologies Institute,
* Carnegie Mellon University.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.alignment.tokenizer;
import java.text.DecimalFormat;
import java.util.LinkedHashMap;
import java.util.Map;
/**
* Implementation of the FeatureSet interface.
*/
public class FeatureSet {
private final Map<String, Object> featureMap;
static DecimalFormat formatter;
/**
* Creates a new empty feature set
*/
public FeatureSet() {
featureMap = new LinkedHashMap<String, Object>();
}
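/*
* Minimal usage sketch (feature names chosen for illustration):
*
*   FeatureSet fs = new FeatureSet();
*   fs.setString("name", "hello");
*   fs.setInt("pos", 3);
*   int pos = fs.isPresent("pos") ? fs.getInt("pos") : 0; // 3
*/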
/**
* Determines if the given feature is present.
*
* @param name the name of the feature of interest
*
* @return true if the named feature is present
*/
public boolean isPresent(String name) {
return featureMap.containsKey(name);
}
/**
* Removes the named feature from this set of features.
*
* @param name the name of the feature of interest
*/
public void remove(String name) {
featureMap.remove(name);
}
/**
* Convenience method that returns the named feature as a string.
*
* @param name the name of the feature
*
* @return the value associated with the name or null if the value is not
* found
*
* @throws ClassCastException if the associated value is not a String
*/
public String getString(String name) {
return (String) getObject(name);
}
/**
* Convenience method that returns the named feature as an int.
*
* @param name the name of the feature
*
* @return the value associated with the name or null if the value is not
* found
*
* @throws ClassCastException if the associated value is not an int.
*/
public int getInt(String name) {
return ((Integer) getObject(name)).intValue();
}
/**
* Convenience method that returns the named feature as a float.
*
* @param name the name of the feature
*
* @return the value associated with the name or null if the value is not
* found.
*
* @throws ClassCastException if the associated value is not a float
*/
public float getFloat(String name) {
return ((Float) getObject(name)).floatValue();
}
/**
* Returns the named feature as an object.
*
* @param name the name of the feature
*
* @return the value associated with the name or null if the value is not
* found
*/
public Object getObject(String name) {
return featureMap.get(name);
}
/**
* Convenience method that sets the named feature as an int.
*
* @param name the name of the feature
* @param value the value of the feature
*/
public void setInt(String name, int value) {
setObject(name, new Integer(value));
}
/**
* Convenience method that sets the named feature as a float.
*
* @param name the name of the feature
* @param value the value of the feature
*/
public void setFloat(String name, float value) {
setObject(name, new Float(value));
}
/**
* Convenience method that sets the named feature as a String.
*
* @param name the name of the feature
* @param value the value of the feature
*/
public void setString(String name, String value) {
setObject(name, value);
}
/**
* Sets the named feature.
*
* @param name the name of the feature
* @param value the value of the feature
*/
public void setObject(String name, Object value) {
featureMap.put(name, value);
}
}

View file

@ -1,447 +0,0 @@
/**
* Portions Copyright 2001-2003 Sun Microsystems, Inc.
* Portions Copyright 1999-2001 Language Technologies Institute,
* Carnegie Mellon University.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.alignment.tokenizer;
import java.util.StringTokenizer;
/**
* Represents a node in a Relation. Items can have shared contents but each
* item has its own set of Daughters. The shared contents of an item
* (represented by ItemContents) includes the feature set for the item and the
* set of all relations that this item is contained in. An item can be
* contained in a number of relations and as daughters to other items. This
* class is used to keep track of all of these relationships. There may be many
* instances of item that reference the same shared ItemContents.
*/
public class Item {
private Relation ownerRelation;
private ItemContents contents;
private Item parent;
private Item daughter;
private Item next;
private Item prev;
/**
* Creates an item. The item is coupled to a particular Relation. If shared
* contents is null a new sharedContents is created.
*
* @param relation the relation that owns this item
* @param sharedContents the contents that is shared with others. If null,
* a new sharedContents is created.
*/
public Item(Relation relation, ItemContents sharedContents) {
ownerRelation = relation;
if (sharedContents != null) {
contents = sharedContents;
} else {
contents = new ItemContents();
}
parent = null;
daughter = null;
next = null;
prev = null;
getSharedContents().addItemRelation(relation.getName(), this);
}
/**
* Finds the item in the given relation that has the same shared contents.
*
* @param relationName the relation of interest
*
* @return the item as found in the given relation or null if not found
*/
public Item getItemAs(String relationName) {
return getSharedContents().getItemRelation(relationName);
}
/**
* Retrieves the owning Relation.
*
* @return the relation that owns this item
*/
public Relation getOwnerRelation() {
return ownerRelation;
}
/**
* Retrieves the shared contents for this item.
*
* @return the shared item contents
*/
public ItemContents getSharedContents() {
return contents;
}
/**
* Determines if this item has daughters.
*
* @return true if this item has daughters
*/
public boolean hasDaughters() {
return daughter != null;
}
/**
* Retrieves the first daughter of this item.
*
* @return the first daughter or null if none
*/
public Item getDaughter() {
return daughter;
}
/**
* Retrieves the Nth daughter of this item.
*
* @param which the index of the daughter to return
*
* @return the Nth daughter or null if none at the given index
*/
public Item getNthDaughter(int which) {
Item d = daughter;
int count = 0;
while (count++ != which && d != null) {
d = d.next;
}
return d;
}
/**
* Retrieves the last daughter of this item.
*
* @return the last daughter or null if there are no daughters
*/
public Item getLastDaughter() {
Item d = daughter;
if (d == null) {
return null;
}
while (d.next != null) {
d = d.next;
}
return d;
}
/**
* Adds the given item as a daughter to this item.
*
* @param item the item to use as the new daughter (may be null)
* @return the newly created daughter item
*/
public Item addDaughter(Item item) {
Item newItem;
ItemContents contents;
Item p = getLastDaughter();
if (p != null) {
newItem = p.appendItem(item);
} else {
if (item == null) {
contents = new ItemContents();
} else {
contents = item.getSharedContents();
}
newItem = new Item(getOwnerRelation(), contents);
newItem.parent = this;
daughter = newItem;
}
return newItem;
}
/**
* Creates a new Item, adds it as a daughter to this item and returns the
* new item.
*
* @return the newly created item that was added as a daughter
*/
public Item createDaughter() {
return addDaughter(null);
}
/**
* Returns the parent of this item.
*
* @return the parent of this item
*/
public Item getParent() {
Item n;
for (n = this; n.prev != null; n = n.prev) {
}
return n.parent;
}
/**
* Sets the parent of this item.
*
* @param parent the parent of this item
*/
/*
* private void setParent(Item parent) { this.parent = parent; }
*/
/**
* Returns the utterance associated with this item.
*
* @return the utterance that contains this item
*/
public Utterance getUtterance() {
return getOwnerRelation().getUtterance();
}
/**
* Returns the feature set of this item.
*
* @return the feature set of this item
*/
public FeatureSet getFeatures() {
return getSharedContents().getFeatures();
}
/**
* Finds the feature by following the given path. Path is a string of ":"
* or "." separated strings with the following interpretations:
* <ul>
* <li>n - next item
* <li>p - previous item
* <li>parent - the parent
* <li>daughter - the daughter
* <li>daughter1 - same as daughter
* <li>daughtern - the last daughter
* <li>R:relname - the item as found in the given relation 'relname'
* </ul>
* The last element of the path will be interpreted as a voice/language
* specific feature function (if present) or an item feature name. If the
* feature function exists it will be called with the item specified by the
* path, otherwise, a feature will be retrieved with the given name. If
* neither exists, then the String "0" is returned.
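*
* <p>
* For example, findFeature("R:SylStructure.parent.name") (relation name
* illustrative) looks this item up in the "SylStructure" relation, moves
* to its parent, and reads that item's "name" feature.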
*
* @param pathAndFeature the path to follow
* @return the feature value, or the String "0" if none was found
*/
public Object findFeature(String pathAndFeature) {
int lastDot;
String feature;
String path;
Item item;
Object results = null;
lastDot = pathAndFeature.lastIndexOf(".");
// string can be of the form "p.feature" or just "feature"
if (lastDot == -1) {
feature = pathAndFeature;
path = null;
} else {
feature = pathAndFeature.substring(lastDot + 1);
path = pathAndFeature.substring(0, lastDot);
}
item = findItem(path);
if (item != null) {
results = item.getFeatures().getObject(feature);
}
results = (results == null) ? "0" : results;
// System.out.println("FI " + pathAndFeature + " are " + results);
return results;
}
/**
* Finds the item specified by the given path.
*
* Path is a string of ":" or "." separated strings with the following
* interpretations:
* <ul>
* <li>n - next item
* <li>p - previous item
* <li>parent - the parent
* <li>daughter - the daughter
* <li>daughter1 - same as daughter
* <li>daughtern - the last daughter
* <li>R:relname - the item as found in the given relation 'relname'
* </ul>
* If the given path takes us outside of the bounds of the item graph, then
* list access exceptions will be thrown.
*
* @param path the path to follow
*
* @return the item at the given path
*/
public Item findItem(String path) {
Item pitem = this;
StringTokenizer tok;
if (path == null) {
return this;
}
tok = new StringTokenizer(path, ":.");
while (pitem != null && tok.hasMoreTokens()) {
String token = tok.nextToken();
if (token.equals("n")) {
pitem = pitem.getNext();
} else if (token.equals("p")) {
pitem = pitem.getPrevious();
} else if (token.equals("nn")) {
pitem = pitem.getNext();
if (pitem != null) {
pitem = pitem.getNext();
}
} else if (token.equals("pp")) {
pitem = pitem.getPrevious();
if (pitem != null) {
pitem = pitem.getPrevious();
}
} else if (token.equals("parent")) {
pitem = pitem.getParent();
} else if (token.equals("daughter") || token.equals("daughter1")) {
pitem = pitem.getDaughter();
} else if (token.equals("daughtern")) {
pitem = pitem.getLastDaughter();
} else if (token.equals("R")) {
String relationName = tok.nextToken();
pitem =
pitem.getSharedContents()
.getItemRelation(relationName);
} else {
System.out.println("findItem: bad feature " + token + " in "
+ path);
}
}
return pitem;
}
/**
* Gets the next item in this list.
*
* @return the next item or null
*/
public Item getNext() {
return next;
}
/**
* Gets the previous item in this list.
*
* @return the previous item or null
*/
public Item getPrevious() {
return prev;
}
/**
* Appends an item in this list after this item.
*
* @param originalItem the item whose shared contents the new item will
* use (may be null)
*
* @return the newly appended item
*/
public Item appendItem(Item originalItem) {
ItemContents contents;
Item newItem;
if (originalItem == null) {
contents = null;
} else {
contents = originalItem.getSharedContents();
}
newItem = new Item(getOwnerRelation(), contents);
newItem.next = this.next;
if (this.next != null) {
this.next.prev = newItem;
}
attach(newItem);
if (this.ownerRelation.getTail() == this) {
this.ownerRelation.setTail(newItem);
}
return newItem;
}
/**
* Attaches/appends an item to this one.
*
* @param item the item to append
*/
void attach(Item item) {
this.next = item;
item.prev = this;
}
/**
* Prepends an item in this list before this item.
*
* @param originalItem the item whose shared contents the new item will
* use (may be null)
*
* @return the newly prepended item
*/
public Item prependItem(Item originalItem) {
ItemContents contents;
Item newItem;
if (originalItem == null) {
contents = null;
} else {
contents = originalItem.getSharedContents();
}
newItem = new Item(getOwnerRelation(), contents);
newItem.prev = this.prev;
if (this.prev != null) {
this.prev.next = newItem;
}
newItem.next = this;
this.prev = newItem;
if (this.parent != null) {
this.parent.daughter = newItem;
newItem.parent = this.parent;
this.parent = null;
}
if (this.ownerRelation.getHead() == this) {
this.ownerRelation.setHead(newItem);
}
return newItem;
}
// Inherited from object
public String toString() {
// if we have a feature called 'name' use that
// otherwise fall back on the default.
String name = getFeatures().getString("name");
if (name == null) {
name = "";
}
return name;
}
/**
* Determines if the shared contents of the two items are the same.
*
* @param otherItem the item to compare
*
* @return true if the shared contents are the same
*/
public boolean equalsShared(Item otherItem) {
if (otherItem == null) {
return false;
} else {
return getSharedContents().equals(otherItem.getSharedContents());
}
}
}

View file

@ -1,74 +0,0 @@
/**
* Portions Copyright 2001 Sun Microsystems, Inc.
* Portions Copyright 1999-2001 Language Technologies Institute,
* Carnegie Mellon University.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.alignment.tokenizer;
/**
* Contains the information that is shared between multiple items.
*/
public class ItemContents {
private FeatureSet features;
private FeatureSet relations;
/**
* Class Constructor.
*/
public ItemContents() {
features = new FeatureSet();
relations = new FeatureSet();
}
/**
* Adds the given item to the set of relations. Whenever an item is added
* to a relation, it should add the name and the Item reference to this set
* of name/item mappings. This allows an item to find out the set of all
* relations that it is contained in.
*
* @param relationName the name of the relation
* @param item the item reference in the relation
*/
public void addItemRelation(String relationName, Item item) {
// System.out.println("AddItemRelation: " + relationName
// + " item: " + item);
relations.setObject(relationName, item);
}
/**
* Removes the relation/item mapping from this ItemContents.
*
* @param relationName the name of the relation/item to remove
*/
public void removeItemRelation(String relationName) {
relations.remove(relationName);
}
/**
* Given the name of a relation, returns the item that shares the same
* ItemContents.
*
* @param relationName the name of the relation of interest
*
* @return the item associated with this ItemContents in the named
* relation, or null if it does not exist
*/
public Item getItemRelation(String relationName) {
return (Item) relations.getObject(relationName);
}
/**
* Returns the feature set for this item contents.
*
* @return the FeatureSet for this contents
*/
public FeatureSet getFeatures() {
return features;
}
}

View file

@ -1,449 +0,0 @@
/**
* Portions Copyright 2001-2003 Sun Microsystems, Inc.
* Portions Copyright 1999-2001 Language Technologies Institute,
* Carnegie Mellon University.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.alignment.tokenizer;
/**
* Expands Strings containing digits characters into a list of words
* representing those digits.
*
* It translates the following code from flite:
* <code>lang/usEnglish/us_expand.c</code>
*/
public class NumberExpander {
private static final String[] digit2num = {"zero", "one", "two", "three",
"four", "five", "six", "seven", "eight", "nine"};
private static final String[] digit2teen = {"ten", /* shouldn't get called */
"eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen",
"seventeen", "eighteen", "nineteen"};
private static final String[] digit2enty = {"zero", /* shouldn't get called */
"ten", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty",
"ninety"};
private static final String[] ord2num = {"zeroth", "first", "second",
"third", "fourth", "fifth", "sixth", "seventh", "eighth", "ninth"};
private static final String[] ord2teen = {"tenth", /* shouldn't get called */
"eleventh", "twelfth", "thirteenth", "fourteenth", "fifteenth",
"sixteenth", "seventeenth", "eighteenth", "nineteenth"};
private static final String[] ord2enty = {"zeroth", /* shouldn't get called */
"tenth", "twentieth", "thirtieth", "fortieth", "fiftieth", "sixtieth",
"seventieth", "eightieth", "ninetieth"};
private static String[] digit2Numness = {
"", "tens", "twenties", "thirties", "fourties", "fifties",
"sixties", "seventies", "eighties", "nineties"
};
/**
* Unconstructable; this class contains only static methods.
*/
private NumberExpander() {}
/**
* Expands a digit string into a list of English words of those digits. For
* example, "1234" expands to "one two three four"
*
* @param numberString the digit string to expand.
* @param wordRelation words are added to this Relation
*/
public static void expandNumber(String numberString,
WordRelation wordRelation) {
int numDigits = numberString.length();
if (numDigits == 0) {
// wordRelation = null;
} else if (numDigits == 1) {
expandDigits(numberString, wordRelation);
} else if (numDigits == 2) {
expand2DigitNumber(numberString, wordRelation);
} else if (numDigits == 3) {
expand3DigitNumber(numberString, wordRelation);
} else if (numDigits < 7) {
expandBelow7DigitNumber(numberString, wordRelation);
} else if (numDigits < 10) {
expandBelow10DigitNumber(numberString, wordRelation);
} else if (numDigits < 13) {
expandBelow13DigitNumber(numberString, wordRelation);
} else {
expandDigits(numberString, wordRelation);
}
}
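/*
* Worked example of the dispatch above: expandNumber("1996", r) adds
* "one", "thousand", "nine", "hundred", "ninety", "six" to r.
*/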
/**
* Expands a two-digit string into a list of English words.
*
* @param numberString the string which is the number to expand
* @param wordRelation words are added to this Relation
*/
private static void expand2DigitNumber(String numberString,
WordRelation wordRelation) {
if (numberString.charAt(0) == '0') {
// numberString is "0X"
if (numberString.charAt(1) == '0') {
// numberString is "00", do nothing
} else {
// numberString is "01", "02" ...
String number = digit2num[numberString.charAt(1) - '0'];
wordRelation.addWord(number);
}
} else if (numberString.charAt(1) == '0') {
// numberString is "10", "20", ...
String number = digit2enty[numberString.charAt(0) - '0'];
wordRelation.addWord(number);
} else if (numberString.charAt(0) == '1') {
// numberString is "11", "12", ..., "19"
String number = digit2teen[numberString.charAt(1) - '0'];
wordRelation.addWord(number);
} else {
// numberString is "2X", "3X", ...
String enty = digit2enty[numberString.charAt(0) - '0'];
wordRelation.addWord(enty);
expandDigits(numberString.substring(1, numberString.length()),
wordRelation);
}
}
/**
* Expands a three-digit string into a list of English words.
*
* @param numberString the string which is the number to expand
* @param wordRelation words are added to this Relation
*/
private static void expand3DigitNumber(String numberString,
WordRelation wordRelation) {
if (numberString.charAt(0) == '0') {
expandNumberAt(numberString, 1, wordRelation);
} else {
String hundredDigit = digit2num[numberString.charAt(0) - '0'];
wordRelation.addWord(hundredDigit);
wordRelation.addWord("hundred");
expandNumberAt(numberString, 1, wordRelation);
}
}
/**
* Expands a string that is a 4- to 6-digit number into a list of English
* words. For example, "333000" expands into "three hundred thirty-three
* thousand".
*
* @param numberString the string which is the number to expand
* @param wordRelation words are added to this Relation
*/
private static void expandBelow7DigitNumber(String numberString,
WordRelation wordRelation) {
expandLargeNumber(numberString, "thousand", 3, wordRelation);
}
/**
* Expands a string that is a 7- to 9-digit number into a list of English
* words. For example, "19000000" expands into "nineteen million".
*
* @param numberString the string which is the number to expand
* @param wordRelation words are added to this Relation
*/
private static void expandBelow10DigitNumber(String numberString,
WordRelation wordRelation) {
expandLargeNumber(numberString, "million", 6, wordRelation);
}
/**
* Expands a string that is a 10- to 12-digit number into a list of English
* words. For example, "27000000000" expands into "twenty-seven billion".
*
* @param numberString the string which is the number to expand
* @param wordRelation words are added to this Relation
*/
private static void expandBelow13DigitNumber(String numberString,
WordRelation wordRelation) {
expandLargeNumber(numberString, "billion", 9, wordRelation);
}
/**
* Expands a string that is a number longer than 3 digits into a list of
* English words. For example, "1000" into one thousand.
*
* @param numberString the string which is the number to expand
* @param order either "thousand", "million", or "billion"
* @param numberZeroes the number of zeroes, depending on the order, so it's
* either 3, 6, or 9
* @param wordRelation words are added to this Relation
*/
private static void expandLargeNumber(String numberString, String order,
int numberZeroes, WordRelation wordRelation) {
int numberDigits = numberString.length();
// parse out the prefix, e.g., "113" in "113,000"
int i = numberDigits - numberZeroes;
String part = numberString.substring(0, i);
// get how many thousands/millions/billions
Item oldTail = wordRelation.getTail();
expandNumber(part, wordRelation);
if (wordRelation.getTail() != oldTail) {
wordRelation.addWord(order);
}
expandNumberAt(numberString, i, wordRelation);
}
/**
* Expands the portion of the given number string starting at the given
* index. E.g., expandNumberAt("1100", 1, wordRelation) adds "one hundred".
*
* @param numberString the string which is the number to expand
* @param startIndex the starting position
* @param wordRelation words are added to this Relation
*/
private static void expandNumberAt(String numberString, int startIndex,
WordRelation wordRelation) {
expandNumber(
numberString.substring(startIndex, numberString.length()),
wordRelation);
}
/**
* Expands the given token into a list of words that pronounce it digit by digit.
*
* @param numberString the string which is the number to expand
* @param wordRelation words are added to this Relation
*/
public static void expandDigits(String numberString,
WordRelation wordRelation) {
int numberDigits = numberString.length();
for (int i = 0; i < numberDigits; i++) {
char digit = numberString.charAt(i);
if (Character.isDigit(digit)) {
wordRelation.addWord(digit2num[numberString.charAt(i) - '0']);
} else {
wordRelation.addWord("umpty");
}
}
}
/**
* Expands the digit string of an ordinal number.
*
* @param rawNumberString the string which is the number to expand
* @param wordRelation words are added to this Relation
*/
public static void expandOrdinal(String rawNumberString,
WordRelation wordRelation) {
// remove all ','s from the raw number string
expandNumber(rawNumberString.replace(",", ""), wordRelation);
// get the last in the list of number strings
Item lastItem = wordRelation.getTail();
if (lastItem != null) {
FeatureSet featureSet = lastItem.getFeatures();
String lastNumber = featureSet.getString("name");
String ordinal = findMatchInArray(lastNumber, digit2num, ord2num);
if (ordinal == null) {
ordinal = findMatchInArray(lastNumber, digit2teen, ord2teen);
}
if (ordinal == null) {
ordinal = findMatchInArray(lastNumber, digit2enty, ord2enty);
}
if (lastNumber.equals("hundred")) {
ordinal = "hundredth";
} else if (lastNumber.equals("thousand")) {
ordinal = "thousandth";
} else if (lastNumber.equals("billion")) {
ordinal = "billionth";
}
// if there was an ordinal, set the last element of the list
// to that ordinal; otherwise, don't do anything
if (ordinal != null) {
wordRelation.setLastWord(ordinal);
}
}
}
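/**
* Expands a 2- or 4-digit string into its "numness" form, e.g. "60"
* becomes "sixties" and "1960" becomes "nineteen sixties". (Description
* inferred from the digit2Numness table above.)
*
* @param rawString the string to expand
* @param wordRelation words are added to this Relation
*/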
public static void expandNumess(String rawString, WordRelation wordRelation) {
if (rawString.length() == 4) {
expand2DigitNumber(rawString.substring(0, 2), wordRelation);
expandNumess(rawString.substring(2), wordRelation);
} else {
wordRelation.addWord(digit2Numness[rawString.charAt(0) - '0']);
}
}
/**
* Finds a match of the given string in the given array, and returns the
* element at the same index in the returnInArray
*
* @param strToMatch the string to match
* @param matchInArray the source array
* @param returnInArray the return array
*
* @return an element in returnInArray, or <code>null</code> if a match is
* not found
*/
private static String findMatchInArray(String strToMatch,
String[] matchInArray, String[] returnInArray) {
for (int i = 0; i < matchInArray.length; i++) {
if (strToMatch.equals(matchInArray[i])) {
if (i < returnInArray.length) {
return returnInArray[i];
} else {
return null;
}
}
}
return null;
}
/**
* Expands the given number string in pairs, as is done for years and IDs.
*
* @param numberString the string which is the number to expand
* @param wordRelation words are added to this Relation
*/
public static void expandID(String numberString, WordRelation wordRelation) {
int numberDigits = numberString.length();
if ((numberDigits == 4) && (numberString.charAt(2) == '0')
&& (numberString.charAt(3) == '0')) {
if (numberString.charAt(1) == '0') { // e.g. 2000, 3000
expandNumber(numberString, wordRelation);
} else {
expandNumber(numberString.substring(0, 2), wordRelation);
wordRelation.addWord("hundred");
}
} else if ((numberDigits == 2) && (numberString.charAt(0) == '0')) {
wordRelation.addWord("oh");
expandDigits(numberString.substring(1, 2), wordRelation);
} else if ((numberDigits == 4 && numberString.charAt(1) == '0')
|| numberDigits < 3) {
expandNumber(numberString, wordRelation);
} else if (numberDigits % 2 == 1) {
String firstDigit = digit2num[numberString.charAt(0) - '0'];
wordRelation.addWord(firstDigit);
expandID(numberString.substring(1, numberDigits), wordRelation);
} else {
expandNumber(numberString.substring(0, 2), wordRelation);
expandID(numberString.substring(2, numberDigits), wordRelation);
}
}
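/*
* Illustration: expandID("1984", r) adds "nineteen", "eighty", "four",
* while expandID("07", r) adds "oh", "seven".
*/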
/**
* Expands the given number string as a real number.
*
* @param numberString the string which is the real number to expand
* @param wordRelation words are added to this Relation
*/
public static void expandReal(String numberString,
WordRelation wordRelation) {
int stringLength = numberString.length();
int position;
if (numberString.charAt(0) == '-') {
// negative real numbers
wordRelation.addWord("minus");
expandReal(numberString.substring(1, stringLength), wordRelation);
} else if (numberString.charAt(0) == '+') {
// prefixed with a '+'
wordRelation.addWord("plus");
expandReal(numberString.substring(1, stringLength), wordRelation);
} else if ((position = numberString.indexOf('e')) != -1
|| (position = numberString.indexOf('E')) != -1) {
// numbers with 'E' or 'e'
expandReal(numberString.substring(0, position), wordRelation);
wordRelation.addWord("e");
expandReal(numberString.substring(position + 1), wordRelation);
} else if ((position = numberString.indexOf('.')) != -1) {
// numbers with '.'
String beforeDot = numberString.substring(0, position);
if (beforeDot.length() > 0) {
expandReal(beforeDot, wordRelation);
}
wordRelation.addWord("point");
String afterDot = numberString.substring(position + 1);
if (afterDot.length() > 0) {
expandDigits(afterDot, wordRelation);
}
} else {
// everything else
expandNumber(numberString, wordRelation);
}
}
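/*
* Illustration: expandReal("-1.5e2", r) adds "minus", "one", "point",
* "five", "e", "two".
*/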
/**
* Expands the given string of letters as a list of single char symbols.
*
* @param letters the string of letters to expand
* @param wordRelation words are added to this Relation
*/
public static void expandLetters(String letters, WordRelation wordRelation) {
letters = letters.toLowerCase();
char c;
for (int i = 0; i < letters.length(); i++) {
// if this is a number
c = letters.charAt(i);
if (Character.isDigit(c)) {
wordRelation.addWord(digit2num[c - '0']);
} else if (letters.equals("a")) {
wordRelation.addWord("_a");
} else {
wordRelation.addWord(String.valueOf(c));
}
}
}
/**
* Returns the integer value of the given string of Roman numerals.
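* For example, expandRoman("XIV") returns 14. Only the numerals I, V, and
* X are recognized; other characters are ignored.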
*
* @param roman the string of Roman numbers
*
* @return the integer value
*/
public static int expandRoman(String roman) {
int value = 0;
for (int p = 0; p < roman.length(); p++) {
char c = roman.charAt(p);
if (c == 'X') {
value += 10;
} else if (c == 'V') {
value += 5;
} else if (c == 'I') {
if (p + 1 < roman.length()) {
char p1 = roman.charAt(p + 1);
if (p1 == 'V') {
value += 4;
p++;
} else if (p1 == 'X') {
value += 9;
p++;
} else {
value += 1;
}
} else {
value += 1;
}
}
}
return value;
}
}

View file

@ -1,264 +0,0 @@
/**
* Portions Copyright 2001 Sun Microsystems, Inc.
* Portions Copyright 1999-2001 Language Technologies Institute,
* Carnegie Mellon University.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.alignment.tokenizer;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.StringTokenizer;
import java.util.logging.Level;
import java.util.logging.Logger;
/**
* Manages a feature or item path and allows navigation to the
* corresponding feature or item. This class is controlled by the following
* system properties:
*
* <pre>
* com.sun.speech.freetts.interpretCartPaths - default false
* com.sun.speech.freetts.lazyCartCompile - default true
* </pre>
*
* Instances of this class will optionally pre-compile the paths. Pre-compiling
* paths reduces the processing time and objects needed to extract a feature or
* an item based upon a path.
*/
public class PathExtractor {
/** Logger instance. */
private static final Logger LOGGER = Logger
.getLogger(PathExtractor.class.getName());
/**
* If this system property is set to true, paths will not be compiled.
*/
public final static String INTERPRET_PATHS_PROPERTY =
"com.sun.speech.freetts.interpretCartPaths";
/**
* If this system property is set to true, CART feature/item paths will
* only be compiled as needed.
*/
public final static String LAZY_COMPILE_PROPERTY =
"com.sun.speech.freetts.lazyCartCompile";
private final static boolean INTERPRET_PATHS = System.getProperty(
INTERPRET_PATHS_PROPERTY, "false").equals("true");
private final static boolean LAZY_COMPILE = System.getProperty(
LAZY_COMPILE_PROPERTY, "true").equals("true");
private String pathAndFeature;
private String path;
private String feature;
private Object[] compiledPath;
/**
* Creates a path for the given feature.
* @param pathAndFeature the path, optionally ending in a feature name
* @param wantFeature whether the last path element names a feature
*/
public PathExtractor(String pathAndFeature, boolean wantFeature) {
this.pathAndFeature = pathAndFeature;
if (INTERPRET_PATHS) {
path = pathAndFeature;
return;
}
if (wantFeature) {
int lastDot = pathAndFeature.lastIndexOf(".");
// string can be of the form "p.feature" or just "feature"
if (lastDot == -1) {
feature = pathAndFeature;
path = null;
} else {
feature = pathAndFeature.substring(lastDot + 1);
path = pathAndFeature.substring(0, lastDot);
}
} else {
this.path = pathAndFeature;
}
if (!LAZY_COMPILE) {
compiledPath = compile(path);
}
}
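/*
* Illustration (relation name hypothetical): new
* PathExtractor("R:SylStructure.parent.n.name", true) splits into the
* path "R:SylStructure.parent.n" and the feature "name"; compiling that
* path yields [RELATION, "SylStructure", PARENT, NEXT].
*/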
/**
* Finds the item associated with this Path.
*
* @param item the item to start at
* @return the item associated with the path or null
*/
public Item findItem(Item item) {
if (INTERPRET_PATHS) {
return item.findItem(path);
}
if (compiledPath == null) {
compiledPath = compile(path);
}
Item pitem = item;
for (int i = 0; pitem != null && i < compiledPath.length;) {
OpEnum op = (OpEnum) compiledPath[i++];
if (op == OpEnum.NEXT) {
pitem = pitem.getNext();
} else if (op == OpEnum.PREV) {
pitem = pitem.getPrevious();
} else if (op == OpEnum.NEXT_NEXT) {
pitem = pitem.getNext();
if (pitem != null) {
pitem = pitem.getNext();
}
} else if (op == OpEnum.PREV_PREV) {
pitem = pitem.getPrevious();
if (pitem != null) {
pitem = pitem.getPrevious();
}
} else if (op == OpEnum.PARENT) {
pitem = pitem.getParent();
} else if (op == OpEnum.DAUGHTER) {
pitem = pitem.getDaughter();
} else if (op == OpEnum.LAST_DAUGHTER) {
pitem = pitem.getLastDaughter();
} else if (op == OpEnum.RELATION) {
String relationName = (String) compiledPath[i++];
pitem =
pitem.getSharedContents()
.getItemRelation(relationName);
} else {
System.out.println("findItem: bad feature " + op + " in "
+ path);
}
}
return pitem;
}
/**
* Finds the feature associated with this Path.
*
* @param item the item to start at
* @return the feature associated or "0" if the feature was not found.
*/
public Object findFeature(Item item) {
if (INTERPRET_PATHS) {
return item.findFeature(path);
}
Item pitem = findItem(item);
Object results = null;
if (pitem != null) {
if (LOGGER.isLoggable(Level.FINER)) {
LOGGER.finer("findFeature: Item [" + pitem + "], feature '"
+ feature + "'");
}
results = pitem.getFeatures().getObject(feature);
}
results = (results == null) ? "0" : results;
if (LOGGER.isLoggable(Level.FINER)) {
LOGGER.finer("findFeature: ...results = '" + results + "'");
}
return results;
}
/**
* Compiles the given path into the compiled form
*
* @param path the path to compile
* @return the compiled form which is in the form of an array path
* traversal enums and associated strings
*/
private Object[] compile(String path) {
if (path == null) {
return new Object[0];
}
List<Object> list = new ArrayList<Object>();
StringTokenizer tok = new StringTokenizer(path, ":.");
while (tok.hasMoreTokens()) {
String token = tok.nextToken();
OpEnum op = OpEnum.getInstance(token);
if (op == null) {
throw new Error("Bad path compiled " + path);
}
list.add(op);
if (op == OpEnum.RELATION) {
list.add(tok.nextToken());
}
}
return list.toArray();
}
// inherited for Object
public String toString() {
return pathAndFeature;
}
// TODO: add these to the interface should we support binary
// files
/*
* public void writeBinary(); public void readBinary();
*/
}
/**
* An enumerated type associated with path operations.
*/
class OpEnum {
static private Map<String, OpEnum> map = new HashMap<String, OpEnum>();
public final static OpEnum NEXT = new OpEnum("n");
public final static OpEnum PREV = new OpEnum("p");
public final static OpEnum NEXT_NEXT = new OpEnum("nn");
public final static OpEnum PREV_PREV = new OpEnum("pp");
public final static OpEnum PARENT = new OpEnum("parent");
public final static OpEnum DAUGHTER = new OpEnum("daughter");
public final static OpEnum LAST_DAUGHTER = new OpEnum("daughtern");
public final static OpEnum RELATION = new OpEnum("R");
private String name;
/**
* Creates a new OpEnum. There is a limited set of OpEnums.
*
* @param name the path name for this Enum
*/
private OpEnum(String name) {
this.name = name;
map.put(name, this);
}
/**
* Gets the OpEnum that is associated with the given name.
*
* @param name the name of the OpEnum of interest
* @return the OpEnum for the given name, or null if there is none
*/
public static OpEnum getInstance(String name) {
return map.get(name);
}
// inherited from Object
public String toString() {
return name;
}
}
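
For illustration, a minimal sketch of how these classes are used together, mirroring the call in Utterance.getItem further below; the segmentItem variable is an assumption:

// Sketch only: walk from a segment item up the SylStructure relation
// and across to the Word relation, as done in Utterance.getItem.
PathExtractor path = new PathExtractor("R:SylStructure.parent.parent.R:Word", false);
Item wordItem = path.findItem(segmentItem);  // segmentItem is assumed given
// With the second constructor argument set to true, the trailing path
// component is treated as a feature name and findFeature returns its
// value, or "0" when the feature is absent.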

View file

@ -1,29 +0,0 @@
/**
* Portions Copyright 2001 Sun Microsystems, Inc.
* Portions Copyright 1999-2001 Language Technologies Institute,
* Carnegie Mellon University.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.alignment.tokenizer;
import java.io.IOException;
import java.net.URL;
/**
* Implements a finite state machine that checks if a given string is a prefix.
*/
public class PrefixFSM extends PronounceableFSM {
/**
* Constructs a PrefixFSM.
* @param url the URL of the FSM specification
* @throws IOException if load failed
*/
public PrefixFSM(URL url) throws IOException {
super(url, true);
}
}

View file

@ -1,172 +0,0 @@
/**
* Portions Copyright 2001 Sun Microsystems, Inc.
* Portions Copyright 1999-2001 Language Technologies Institute,
* Carnegie Mellon University.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.alignment.tokenizer;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.StringTokenizer;
/**
* Implements a finite state machine that checks if a given string is
* pronounceable. If it is pronounceable, the method <code>accept()</code> will
* return true.
*/
public class PronounceableFSM {
private static final String VOCAB_SIZE = "VOCAB_SIZE";
private static final String NUM_OF_TRANSITIONS = "NUM_OF_TRANSITIONS";
private static final String TRANSITIONS = "TRANSITIONS";
/**
* The vocabulary size.
*/
protected int vocabularySize;
/**
* The transitions of this FSM
*/
protected int[] transitions;
/**
* Whether we should scan the input string from the front.
*/
protected boolean scanFromFront;
/**
* Constructs a PronounceableFSM with information in the given URL.
*
* @param url the URL that contains the FSM specification
* @param scanFromFront indicates whether this FSM should scan the input
* string from the front, or from the back
* @throws IOException if something went wrong
*/
public PronounceableFSM(URL url, boolean scanFromFront) throws IOException {
this.scanFromFront = scanFromFront;
InputStream is = url.openStream();
loadText(is);
is.close();
}
/**
* Constructs a PronounceableFSM with the given attributes.
*
* @param vocabularySize the vocabulary size of the FSM
* @param transitions the transitions of the FSM
* @param scanFromFront indicates whether this FSM should scan the input
* string from the front, or from the back
*/
public PronounceableFSM(int vocabularySize, int[] transitions,
boolean scanFromFront) {
this.vocabularySize = vocabularySize;
this.transitions = transitions;
this.scanFromFront = scanFromFront;
}
/**
* Loads the ASCII specification of this FSM from the given InputStream.
*
* @param is the input stream to load from
*
* @throws IOException if an error occurs on input.
*/
private void loadText(InputStream is) throws IOException {
BufferedReader reader = new BufferedReader(new InputStreamReader(is));
String line = null;
while ((line = reader.readLine()) != null) {
if (!line.startsWith("***")) {
if (line.startsWith(VOCAB_SIZE)) {
vocabularySize = parseLastInt(line);
} else if (line.startsWith(NUM_OF_TRANSITIONS)) {
int transitionsSize = parseLastInt(line);
transitions = new int[transitionsSize];
} else if (line.startsWith(TRANSITIONS)) {
StringTokenizer st = new StringTokenizer(line);
String transition = st.nextToken();
int i = 0;
while (st.hasMoreTokens() && i < transitions.length) {
transition = st.nextToken().trim();
transitions[i++] = Integer.parseInt(transition);
}
}
}
}
reader.close();
}
/**
* Returns the integer value of the last integer in the given string.
*
* @param line the line to parse the integer from
*
* @return an integer
*/
private int parseLastInt(String line) {
String lastInt = line.trim().substring(line.lastIndexOf(" "));
return Integer.parseInt(lastInt.trim());
}
/**
* Causes this FSM to transition to the next state given the current state
* and input symbol.
*
* @param state the current state
* @param symbol the input symbol
*/
private int transition(int state, int symbol) {
for (int i = state; i < transitions.length; i++) {
if ((transitions[i] % vocabularySize) == symbol) {
return (transitions[i] / vocabularySize);
}
}
return -1;
}
/**
* Checks to see if this finite state machine accepts the given input
* string.
*
* @param inputString the input string to be tested
*
* @return true if this FSM accepts, false if it rejects
*/
public boolean accept(String inputString) {
int symbol;
int state = transition(0, '#');
int leftEnd = inputString.length() - 1;
int start = (scanFromFront) ? 0 : leftEnd;
for (int i = start; 0 <= i && i <= leftEnd;) {
char c = inputString.charAt(i);
if (c == 'n' || c == 'm') {
symbol = 'N';
} else if ("aeiouy".indexOf(c) != -1) {
symbol = 'V';
} else {
symbol = c;
}
state = transition(state, symbol);
if (state == -1) {
return false;
} else if (symbol == 'V') {
return true;
}
if (scanFromFront) {
i++;
} else {
i--;
}
}
return false;
}
}
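
The packed transition table can be puzzling at first. A short sketch of the encoding, derived directly from the transition() method above:

// Each entry of the transitions array packs (nextState, symbol) into one
// int as: entry = nextState * vocabularySize + symbol.
// transition(state, symbol) scans forward from index `state` for an entry
// whose symbol part matches, then decodes the next state:
int symbolPart = transitions[i] % vocabularySize;  // input symbol of entry i
int nextState  = transitions[i] / vocabularySize;  // state reached on a match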

View file

@ -1,145 +0,0 @@
/**
* Portions Copyright 2001 Sun Microsystems, Inc.
* Portions Copyright 1999-2001 Language Technologies Institute,
* Carnegie Mellon University.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.alignment.tokenizer;
import edu.cmu.sphinx.alignment.USEnglishTokenizer;
/**
* Represents an ordered set of {@link Item}s and their associated children. A
* relation has a name and a list of items, and is added to an
* {@link Utterance} via an {@link USEnglishTokenizer}.
*/
public class Relation {
private String name;
private Utterance owner;
private Item head;
private Item tail;
/**
* Name of the relation that contains tokens from the original input text.
* This is the first thing to be added to the utterance.
*/
public static final String TOKEN = "Token";
/**
* Name of the relation that contains the normalized version of the
* original input text.
*/
public static final String WORD = "Word";
/**
* Creates a relation.
*
* @param name the name of the Relation
* @param owner the utterance that contains this relation
*/
Relation(String name, Utterance owner) {
this.name = name;
this.owner = owner;
head = null;
tail = null;
}
/**
* Retrieves the name of this Relation.
*
* @return the name of this Relation
*/
public String getName() {
return name;
}
/**
* Gets the head of the item list.
*
* @return the head item
*/
public Item getHead() {
return head;
}
/**
* Sets the head of the item list.
*
* @param item the new head item
*/
void setHead(Item item) {
head = item;
}
/**
* Gets the tail of the item list.
*
* @return the tail item
*/
public Item getTail() {
return tail;
}
/**
* Sets the tail of the item list.
*
* @param item the new tail item
*/
void setTail(Item item) {
tail = item;
}
/**
* Adds a new item to this relation. The item added does not share its
* contents with any other item.
*
* @return the newly added item
*/
public Item appendItem() {
return appendItem(null);
}
/**
* Adds a new item to this relation. The item added shares its contents
* with the original item.
*
* @param originalItem the ItemContents that will be shared by the new item
*
* @return the newly added item
*/
public Item appendItem(Item originalItem) {
ItemContents contents;
Item newItem;
if (originalItem == null) {
contents = null;
} else {
contents = originalItem.getSharedContents();
}
newItem = new Item(this, contents);
if (head == null) {
head = newItem;
}
if (tail != null) {
tail.attach(newItem);
}
tail = newItem;
return newItem;
}
/**
* Returns the utterance that contains this relation.
*
* @return the utterance that contains this relation
*/
public Utterance getUtterance() {
return owner;
}
}

View file

@ -1,29 +0,0 @@
/**
* Portions Copyright 2001 Sun Microsystems, Inc.
* Portions Copyright 1999-2001 Language Technologies Institute,
* Carnegie Mellon University.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.alignment.tokenizer;
import java.io.IOException;
import java.net.URL;
/**
* Implements a finite state machine that checks if a given string is a suffix.
*/
public class SuffixFSM extends PronounceableFSM {
/**
* Constructs a SuffixFSM.
* @param url the URL of the FSM specification
* @throws IOException if loading failed
*/
public SuffixFSM(URL url) throws IOException {
super(url, false);
}
}
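
A hedged usage sketch for the two FSMs; the resource names are assumptions (the actual FSM files ship with the model data):

// Sketch: load the FSMs from (assumed) resource URLs and test a token.
URL prefixUrl = PrefixFSM.class.getResource("prefix_fsm");  // hypothetical resource name
URL suffixUrl = SuffixFSM.class.getResource("suffix_fsm");  // hypothetical resource name
PronounceableFSM prefixFSM = new PrefixFSM(prefixUrl);
PronounceableFSM suffixFSM = new SuffixFSM(suffixUrl);
boolean pronounceable = prefixFSM.accept("unbelievable")
        && suffixFSM.accept("unbelievable");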

View file

@ -1,229 +0,0 @@
/**
* Portions Copyright 2001 Sun Microsystems, Inc.
* Portions Copyright 1999-2001 Language Technologies Institute,
* Carnegie Mellon University.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.alignment.tokenizer;
import java.util.Iterator;
import edu.cmu.sphinx.alignment.Token;
/**
* Holds all the data for an utterance to be spoken. It is incrementally
* modified by various UtteranceProcessor implementations. An utterance
* contains a set of Features (essentially a set of properties) and a set of
* Relations. A Relation is an ordered set of Item graphs. The utterance
* contains a set of features and implements FeatureSet so that applications
* can set/get features directly from the utterance. If a feature query is not
* found in the utterance feature set, the query is forwarded to the FeatureSet
* of the voice associated with the utterance.
*/
public class Utterance {
private FeatureSet features;
private FeatureSet relations;
/**
* Creates an utterance with the given set of tokenized text.
*
* @param tokenizer tokenizer to use for utterance.
*/
public Utterance(CharTokenizer tokenizer) {
features = new FeatureSet();
relations = new FeatureSet();
setTokenList(tokenizer);
}
/**
* Creates a new relation with the given name and adds it to this
* utterance.
*
* @param name the name of the new relation
*
* @return the newly created relation
*/
public Relation createRelation(String name) {
Relation relation = new Relation(name, this);
relations.setObject(name, relation);
return relation;
}
/**
* Retrieves a relation from this utterance.
*
* @param name the name of the Relation
*
* @return the relation or null if the relation is not found
*/
public Relation getRelation(String name) {
return (Relation) relations.getObject(name);
}
/**
* Determines if this utterance contains a relation with the given name.
*
* @param name the name of the relation of interest.
* @return if relation is present
*/
public boolean hasRelation(String name) {
return relations.isPresent(name);
}
/**
* Removes the named feature from this set of features.
*
* @param name the name of the feature of interest
*/
public void remove(String name) {
features.remove(name);
}
/**
* Convenience method that sets the named feature as an int.
*
* @param name the name of the feature
* @param value the value of the feature
*/
public void setInt(String name, int value) {
features.setInt(name, value);
}
/**
* Convenience method that sets the named feature as a float.
*
* @param name the name of the feature
* @param value the value of the feature
*/
public void setFloat(String name, float value) {
features.setFloat(name, value);
}
/**
* Convenience method that sets the named feature as a String.
*
* @param name the name of the feature
* @param value the value of the feature
*/
public void setString(String name, String value) {
features.setString(name, value);
}
/**
* Sets the named feature.
*
* @param name the name of the feature
* @param value the value of the feature
*/
public void setObject(String name, Object value) {
features.setObject(name, value);
}
/**
* Returns the Item in the given Relation associated with the given time.
*
* @param relation the name of the relation
* @param time the time
* @return the item
*/
public Item getItem(String relation, float time) {
Relation segmentRelation = null;
String pathName = null;
if (relation.equals(Relation.WORD)) {
pathName = "R:SylStructure.parent.parent.R:Word";
} else if (relation.equals(Relation.TOKEN)) {
pathName = "R:SylStructure.parent.parent.R:Token.parent";
} else {
throw new IllegalArgumentException(
"Utterance.getItem(): relation cannot be " + relation);
}
PathExtractor path = new PathExtractor(pathName, false);
// get the Item in the Segment Relation with the given time
Item segmentItem = getItem(segmentRelation, time);
if (segmentItem != null) {
return path.findItem(segmentItem);
} else {
return null;
}
}
private static Item getItem(Relation segmentRelation, float time) {
Item lastSegment = segmentRelation.getTail();
// If given time is closer to the front than the end, search from
// the front; otherwise, start search from end
// this might not be the best strategy though.
float lastSegmentEndTime = getSegmentEnd(lastSegment);
if (time < 0 || lastSegmentEndTime < time) {
return null;
} else if (lastSegmentEndTime - time > time) {
return findFromFront(segmentRelation, time);
} else {
return findFromEnd(segmentRelation, time);
}
}
private static Item findFromEnd(Relation segmentRelation, float time) {
Item item = segmentRelation.getTail();
while (item != null && getSegmentEnd(item) > time) {
item = item.getPrevious();
}
if (item != segmentRelation.getTail()) {
item = item.getNext();
}
return item;
}
private static Item findFromFront(Relation segmentRelation, float time) {
Item item = segmentRelation.getHead();
while (item != null && time > getSegmentEnd(item)) {
item = item.getNext();
}
return item;
}
private static float getSegmentEnd(Item segment) {
FeatureSet segmentFeatureSet = segment.getFeatures();
return segmentFeatureSet.getFloat("end");
}
/**
* Sets the token list for this utterance. Note that this could be
* optimized by turning the token list directly into the token relation.
*
* @param tokenList the tokenList
*
*/
private void setTokenList(Iterator<Token> tokenizer) {
Relation relation = createRelation(Relation.TOKEN);
while (tokenizer.hasNext()) {
Token token = tokenizer.next();
String tokenWord = token.getWord();
if (tokenWord != null && tokenWord.length() > 0) {
Item item = relation.appendItem();
FeatureSet featureSet = item.getFeatures();
featureSet.setString("name", tokenWord);
featureSet.setString("whitespace", token.getWhitespace());
featureSet.setString("prepunctuation",
token.getPrepunctuation());
featureSet.setString("punc", token.getPostpunctuation());
featureSet.setString("file_pos",
String.valueOf(token.getPosition()));
featureSet.setString("line_number",
String.valueOf(token.getLineNumber()));
}
}
}
}
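
A minimal sketch of building a relation by hand, mirroring setTokenList above; the utterance variable (constructed from a CharTokenizer elsewhere) is an assumption:

// Sketch: add a Word relation to an existing Utterance and append an item.
Relation words = utterance.createRelation(Relation.WORD);  // utterance assumed given
Item item = words.appendItem();
item.getFeatures().setString("name", "hello");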

View file

@ -1,85 +0,0 @@
/**
* Portions Copyright 2001 Sun Microsystems, Inc.
* Portions Copyright 1999-2001 Language Technologies Institute,
* Carnegie Mellon University.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.alignment.tokenizer;
import edu.cmu.sphinx.alignment.USEnglishTokenizer;
/**
* Helper class to add words and breaks into a Relation object.
*/
public class WordRelation {
private Relation relation;
private USEnglishTokenizer tokenToWords;
private WordRelation(Relation parentRelation, USEnglishTokenizer tokenToWords) {
this.relation = parentRelation;
this.tokenToWords = tokenToWords;
}
/**
* Creates a WordRelation object with the given utterance and TokenToWords.
*
* @param utterance the Utterance from which to create a Relation
* @param tokenToWords the TokenToWords object to use
*
* @return a WordRelation object
*/
public static WordRelation createWordRelation(Utterance utterance,
USEnglishTokenizer tokenToWords) {
Relation relation = utterance.createRelation(Relation.WORD);
return new WordRelation(relation, tokenToWords);
}
/**
* Adds a break as a feature to the last item in the list.
*/
public void addBreak() {
Item wordItem = relation.getTail();
if (wordItem != null) {
FeatureSet featureSet = wordItem.getFeatures();
featureSet.setString("break", "1");
}
}
/**
* Adds a word as an Item to this WordRelation object.
*
* @param word the word to add
*/
public void addWord(String word) {
Item tokenItem = tokenToWords.getTokenItem();
Item wordItem = tokenItem.createDaughter();
FeatureSet featureSet = wordItem.getFeatures();
featureSet.setString("name", word);
relation.appendItem(wordItem);
}
/**
* Sets the last Item in this WordRelation to the given word.
*
* @param word the word to set
*/
public void setLastWord(String word) {
Item lastItem = relation.getTail();
FeatureSet featureSet = lastItem.getFeatures();
featureSet.setString("name", word);
}
/**
* Returns the last item in this WordRelation.
*
* @return the last item
*/
public Item getTail() {
return relation.getTail();
}
}

View file

@ -1,81 +0,0 @@
/*
* Copyright 2013 Carnegie Mellon University.
* Portions Copyright 2004 Sun Microsystems, Inc.
* Portions Copyright 2004 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.api;
import java.io.IOException;
import edu.cmu.sphinx.decoder.adaptation.ClusteredDensityFileData;
import edu.cmu.sphinx.decoder.adaptation.Stats;
import edu.cmu.sphinx.decoder.adaptation.Transform;
import edu.cmu.sphinx.linguist.acoustic.tiedstate.Sphinx3Loader;
import edu.cmu.sphinx.recognizer.Recognizer;
import edu.cmu.sphinx.result.Result;
/**
* Base class for high-level speech recognizers.
*/
public class AbstractSpeechRecognizer {
protected final Context context;
protected final Recognizer recognizer;
protected ClusteredDensityFileData clusters;
protected final SpeechSourceProvider speechSourceProvider;
/**
* Constructs recognizer object using provided configuration.
* @param configuration initial configuration
* @throws IOException if IO went wrong
*/
public AbstractSpeechRecognizer(Configuration configuration)
throws IOException
{
this(new Context(configuration));
}
protected AbstractSpeechRecognizer(Context context) throws IOException {
this.context = context;
recognizer = context.getInstance(Recognizer.class);
speechSourceProvider = new SpeechSourceProvider();
}
/**
* Returns result of the recognition.
*
* @return recognition result or {@code null} if there is no result, e.g., because the
* microphone or input stream has been closed
*/
public SpeechResult getResult() {
Result result = recognizer.recognize();
return null == result ? null : new SpeechResult(result);
}
public Stats createStats(int numClasses) {
clusters = new ClusteredDensityFileData(context.getLoader(), numClasses);
return new Stats(context.getLoader(), clusters);
}
public void setTransform(Transform transform) {
if (clusters != null) {
context.getLoader().update(transform, clusters);
}
}
public void loadTransform(String path, int numClass) throws Exception {
clusters = new ClusteredDensityFileData(context.getLoader(), numClass);
Transform transform = new Transform((Sphinx3Loader)context.getLoader(), numClass);
transform.load(path);
context.getLoader().update(transform, clusters);
}
}
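
As a hedged illustration of the adaptation hooks above, applying a previously saved MLLR transform; the file name and cluster count are assumptions:

// recognizer is an instance of an AbstractSpeechRecognizer subclass.
recognizer.loadTransform("mllr_matrix", 1);  // hypothetical path, one cluster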

View file

@ -1,139 +0,0 @@
/*
* Copyright 2013 Carnegie Mellon University.
* Portions Copyright 2004 Sun Microsystems, Inc.
* Portions Copyright 2004 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.api;
/**
* Represents common configuration options.
*
* This configuration is used by high-level recognition classes.
*
* @see SpeechAligner
* @see LiveSpeechRecognizer
* @see StreamSpeechRecognizer
*/
public class Configuration {
private String acousticModelPath;
private String dictionaryPath;
private String languageModelPath;
private String grammarPath;
private String grammarName;
private int sampleRate = 16000;
private boolean useGrammar = false;
/**
* @return path to acoustic model
*/
public String getAcousticModelPath() {
return acousticModelPath;
}
/**
* Sets path to acoustic model.
* @param acousticModelPath URL of the acoustic model
*/
public void setAcousticModelPath(String acousticModelPath) {
this.acousticModelPath = acousticModelPath;
}
/**
* @return path to dictionary.
*/
public String getDictionaryPath() {
return dictionaryPath;
}
/**
* Sets path to dictionary.
* @param dictionaryPath URL of the dictionary
*/
public void setDictionaryPath(String dictionaryPath) {
this.dictionaryPath = dictionaryPath;
}
/**
* @return path to the language model
*/
public String getLanguageModelPath() {
return languageModelPath;
}
/**
* Sets paths to language model resource.
* @param languageModelPath URL of the language model
*/
public void setLanguageModelPath(String languageModelPath) {
this.languageModelPath = languageModelPath;
}
/**
* @return grammar path
*/
public String getGrammarPath() {
return grammarPath;
}
/**
* Sets path to grammar resources.
* @param grammarPath URL of the grammar
*/
public void setGrammarPath(String grammarPath) {
this.grammarPath = grammarPath;
}
/**
* @return grammar name
*/
public String getGrammarName() {
return grammarName;
}
/**
* Sets grammar name if fixed grammar is used.
* @param grammarName of the grammar
*/
public void setGrammarName(String grammarName) {
this.grammarName = grammarName;
}
/**
* @return whether fixed grammar should be used instead of language model.
*/
public boolean getUseGrammar() {
return useGrammar;
}
/**
* Sets whether fixed grammar should be used instead of language model.
* @param useGrammar to use grammar or language model
*/
public void setUseGrammar(boolean useGrammar) {
this.useGrammar = useGrammar;
}
/**
* @return the configured sample rate.
*/
public int getSampleRate() {
return sampleRate;
}
/**
* Sets sample rate for the input stream.
* @param sampleRate sample rate in Hertz
*/
public void setSampleRate(int sampleRate) {
this.sampleRate = sampleRate;
}
}
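
A hedged sketch of these setters for a grammar-based setup; the model paths, grammar location, and grammar name are all assumptions:

Configuration configuration = new Configuration();
configuration.setAcousticModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us");  // assumption
configuration.setDictionaryPath("resource:/edu/cmu/sphinx/models/en-us/cmudict-en-us.dict");  // assumption
// Use a fixed JSGF grammar instead of a language model:
configuration.setGrammarPath("resource:/grammars/");  // hypothetical location
configuration.setGrammarName("commands");             // hypothetical "commands.gram" file
configuration.setUseGrammar(true);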

View file

@ -1,222 +0,0 @@
/*
* Copyright 2013 Carnegie Mellon University.
* Portions Copyright 2004 Sun Microsystems, Inc.
* Portions Copyright 2004 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.api;
import static edu.cmu.sphinx.util.props.ConfigurationManagerUtils.resourceToURL;
import static edu.cmu.sphinx.util.props.ConfigurationManagerUtils.setProperty;
import java.io.IOException;
import java.io.InputStream;
import java.net.MalformedURLException;
import edu.cmu.sphinx.frontend.frequencywarp.MelFrequencyFilterBank2;
import edu.cmu.sphinx.frontend.util.StreamDataSource;
import edu.cmu.sphinx.linguist.acoustic.tiedstate.Loader;
import edu.cmu.sphinx.util.TimeFrame;
import edu.cmu.sphinx.util.props.Configurable;
import edu.cmu.sphinx.util.props.ConfigurationManager;
/**
* Helps to tweak configuration without touching XML-file directly.
*/
public class Context {
private final ConfigurationManager configurationManager;
/**
* Constructs builder that uses default XML configuration.
* @param config configuration
* @throws MalformedURLException if failed to load configuration file
*/
public Context(Configuration config)
throws IOException, MalformedURLException
{
this("resource:/edu/cmu/sphinx/api/default.config.xml", config);
}
/**
* Constructs builder using user-supplied XML configuration.
*
* @param path path to XML-resource with configuration
* @param config configuration
* @throws MalformedURLException if failed to load configuration file
* @throws IOException if failed to load configuration file
*/
public Context(String path, Configuration config)
throws IOException, MalformedURLException
{
configurationManager = new ConfigurationManager(resourceToURL(path));
setAcousticModel(config.getAcousticModelPath());
setDictionary(config.getDictionaryPath());
if (null != config.getGrammarPath() && config.getUseGrammar())
setGrammar(config.getGrammarPath(), config.getGrammarName());
if (null != config.getLanguageModelPath() && !config.getUseGrammar())
setLanguageModel(config.getLanguageModelPath());
setSampleRate(config.getSampleRate());
// Force ConfigurationManager to build the whole graph
// in order to enable instance lookup by class.
configurationManager.lookup("recognizer");
}
/**
* Sets acoustic model location.
*
* It also reads feat.params which should be located at the root of
* acoustic model and sets corresponding parameters of
* {@link MelFrequencyFilterBank2} instance.
*
* @param path path to directory with acoustic model files
*
* @throws IOException if failed to read feat.params
*/
public void setAcousticModel(String path) throws IOException {
setLocalProperty("acousticModelLoader->location", path);
setLocalProperty("dictionary->fillerPath", path + "/noisedict");
}
/**
* Sets dictionary.
*
* @param path path to directory with dictionary files
*/
public void setDictionary(String path) {
setLocalProperty("dictionary->dictionaryPath", path);
}
/**
* Sets sampleRate.
*
* @param sampleRate sample rate of the input stream.
*/
public void setSampleRate(int sampleRate) {
setLocalProperty("dataSource->sampleRate", Integer.toString(sampleRate));
}
/**
* Sets path to the grammar files.
*
* Enables static grammar and disables probabilistic language model.
* JSGF and GrXML formats are supported.
*
* @param path path to the grammar files
* @param name name of the main grammar to use
* @see Context#setLanguageModel(String)
*/
public void setGrammar(String path, String name) {
// TODO: use a single param of type File, cache directory part
if (name.endsWith(".grxml")) {
setLocalProperty("grXmlGrammar->grammarLocation", path + name);
setLocalProperty("flatLinguist->grammar", "grXmlGrammar");
} else {
setLocalProperty("jsgfGrammar->grammarLocation", path);
setLocalProperty("jsgfGrammar->grammarName", name);
setLocalProperty("flatLinguist->grammar", "jsgfGrammar");
}
setLocalProperty("decoder->searchManager", "simpleSearchManager");
}
/**
* Sets path to the language model.
*
* Enables probabilistic language model and disables static grammar.
* Currently it supports ".lm" and ".dmp" file formats.
*
* @param path path to the language model file
* @see Context#setGrammar(String, String)
*
* @throws IllegalArgumentException if path ends with unsupported extension
*/
public void setLanguageModel(String path) {
if (path.endsWith(".lm")) {
setLocalProperty("simpleNGramModel->location", path);
setLocalProperty(
"lexTreeLinguist->languageModel", "simpleNGramModel");
} else if (path.endsWith(".dmp")) {
setLocalProperty("largeTrigramModel->location", path);
setLocalProperty(
"lexTreeLinguist->languageModel", "largeTrigramModel");
} else {
throw new IllegalArgumentException(
"Unknown format extension: " + path);
}
// the search manager for LVCSR is set by default
}
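/**
 * Sets byte stream with a time range as the speech source.
 *
 * @param stream stream to process
 * @param timeFrame time range of the stream to process
 */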
public void setSpeechSource(InputStream stream, TimeFrame timeFrame) {
getInstance(StreamDataSource.class).setInputStream(stream, timeFrame);
setLocalProperty("trivialScorer->frontend", "liveFrontEnd");
}
/**
* Sets byte stream as the speech source.
*
* @param stream stream to process
*/
public void setSpeechSource(InputStream stream) {
getInstance(StreamDataSource.class).setInputStream(stream);
setLocalProperty("trivialScorer->frontend", "liveFrontEnd");
}
/**
* Sets property within a "component" tag in configuration.
*
* Use this method to alter "value" property of a "property" tag inside a
* "component" tag of the XML configuration.
*
* @param name property name
* @param value property value
* @see Context#setGlobalProperty(String, Object)
*/
public void setLocalProperty(String name, Object value) {
setProperty(configurationManager, name, value.toString());
}
/**
* Sets property of a top-level "property" tag.
*
* Use this method to alter "value" property of a "property" tag whose
* parent is the root tag "config" of the XML configuration.
*
* @param name property name
* @param value property value
* @see Context#setLocalProperty(String, Object)
*/
public void setGlobalProperty(String name, Object value) {
configurationManager.setGlobalProperty(name, value.toString());
}
/**
* Returns instance of the XML configuration by its class.
*
* @param clazz class to look up
* @param <C> the component type
* @return instance of the specified class or null
*/
public <C extends Configurable> C getInstance(Class<C> clazz) {
return configurationManager.lookup(clazz);
}
/**
* Returns the Loader object used for loading the acoustic model.
*
* @return the loader object
*/
public Loader getLoader(){
return (Loader) configurationManager.lookup("acousticModelLoader");
}
}
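
A hedged sketch of property tweaking; the first call is taken from SpeechAligner below, while the global property name is a hypothetical example:

// Sketch: tweak component properties without editing the XML, using the
// "component->property" convention described above.
context.setLocalProperty("decoder->searchManager", "alignerSearchManager");
context.setGlobalProperty("absoluteBeamWidth", 500);  // hypothetical global property name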

View file

@ -1,62 +0,0 @@
/*
* Copyright 2013 Carnegie Mellon University.
* Portions Copyright 2004 Sun Microsystems, Inc.
* Portions Copyright 2004 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.api;
import java.io.IOException;
import edu.cmu.sphinx.frontend.util.StreamDataSource;
/**
* High-level class for live speech recognition.
*/
public class LiveSpeechRecognizer extends AbstractSpeechRecognizer {
private final Microphone microphone;
/**
* Constructs new live recognition object.
*
* @param configuration common configuration
* @throws IOException if model IO went wrong
*/
public LiveSpeechRecognizer(Configuration configuration) throws IOException
{
super(configuration);
microphone = speechSourceProvider.getMicrophone();
context.getInstance(StreamDataSource.class)
.setInputStream(microphone.getStream());
}
/**
* Starts recognition process.
*
* @param clear clear cached microphone data
* @see LiveSpeechRecognizer#stopRecognition()
*/
public void startRecognition(boolean clear) {
recognizer.allocate();
microphone.startRecording();
}
/**
* Stops recognition process.
*
* Recognition process is paused until the next call to startRecognition.
*
* @see LiveSpeechRecognizer#startRecognition(boolean)
*/
public void stopRecognition() {
microphone.stopRecording();
recognizer.deallocate();
}
}
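
A self-contained sketch of live recognition using the API above; the model paths are assumptions and must point at a real acoustic model, dictionary, and language model:

import edu.cmu.sphinx.api.Configuration;
import edu.cmu.sphinx.api.LiveSpeechRecognizer;
import edu.cmu.sphinx.api.SpeechResult;

public class LiveDemo {
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        // Model paths are assumptions; point them at a real model.
        configuration.setAcousticModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us");
        configuration.setDictionaryPath("resource:/edu/cmu/sphinx/models/en-us/cmudict-en-us.dict");
        configuration.setLanguageModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us.lm.dmp");

        LiveSpeechRecognizer recognizer = new LiveSpeechRecognizer(configuration);
        recognizer.startRecognition(true);  // true: clear cached microphone data
        SpeechResult result;
        while ((result = recognizer.getResult()) != null) {
            System.out.println(result.getHypothesis());
        }
        recognizer.stopRecognition();
    }
}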

View file

@ -1,54 +0,0 @@
/*
* Copyright 1999-2004 Carnegie Mellon University.
* Portions Copyright 2004 Sun Microsystems, Inc.
* Portions Copyright 2004 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.api;
import java.io.InputStream;
import javax.sound.sampled.*;
/**
* InputStream adapter for microphone audio capture.
*/
public class Microphone {
private final TargetDataLine line;
private final InputStream inputStream;
public Microphone(
float sampleRate,
int sampleSize,
boolean signed,
boolean bigEndian) {
AudioFormat format =
new AudioFormat(sampleRate, sampleSize, 1, signed, bigEndian);
try {
line = AudioSystem.getTargetDataLine(format);
line.open();
} catch (LineUnavailableException e) {
throw new IllegalStateException(e);
}
inputStream = new AudioInputStream(line);
}
public void startRecording() {
line.start();
}
public void stopRecording() {
line.stop();
}
public InputStream getStream() {
return inputStream;
}
}

View file

@ -1,263 +0,0 @@
/*
* Copyright 2014 Alpha Cephei Inc.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.api;
import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.TreeMap;
import java.util.logging.Logger;
import edu.cmu.sphinx.alignment.LongTextAligner;
import edu.cmu.sphinx.alignment.SimpleTokenizer;
import edu.cmu.sphinx.alignment.TextTokenizer;
import edu.cmu.sphinx.linguist.language.grammar.AlignerGrammar;
import edu.cmu.sphinx.linguist.language.ngram.DynamicTrigramModel;
import edu.cmu.sphinx.recognizer.Recognizer;
import edu.cmu.sphinx.result.Result;
import edu.cmu.sphinx.result.WordResult;
import edu.cmu.sphinx.util.Range;
import edu.cmu.sphinx.util.TimeFrame;
public class SpeechAligner {
private final Logger logger = Logger.getLogger(getClass().getSimpleName());
private static final int TUPLE_SIZE = 3;
private final Context context;
private final Recognizer recognizer;
private final AlignerGrammar grammar;
private final DynamicTrigramModel languageModel;
private TextTokenizer tokenizer;
public SpeechAligner(String amPath, String dictPath, String g2pPath) throws MalformedURLException, IOException {
Configuration configuration = new Configuration();
configuration.setAcousticModelPath(amPath);
configuration.setDictionaryPath(dictPath);
context = new Context(configuration);
if (g2pPath != null) {
context.setLocalProperty("dictionary->g2pModelPath", g2pPath);
context.setLocalProperty("dictionary->g2pMaxPron", "2");
}
context.setLocalProperty("lexTreeLinguist->languageModel", "dynamicTrigramModel");
recognizer = context.getInstance(Recognizer.class);
grammar = context.getInstance(AlignerGrammar.class);
languageModel = context.getInstance(DynamicTrigramModel.class);
setTokenizer(new SimpleTokenizer());
}
public List<WordResult> align(URL audioUrl, String transcript) throws IOException {
return align(audioUrl, getTokenizer().expand(transcript));
}
/**
* Align audio to sentence transcript
*
* @param audioUrl audio file URL to process
* @param sentenceTranscript cleaned transcript
* @return List of aligned words with timings
* @throws IOException if IO went wrong
*/
public List<WordResult> align(URL audioUrl, List<String> sentenceTranscript) throws IOException {
List<String> transcript = sentenceToWords(sentenceTranscript);
LongTextAligner aligner = new LongTextAligner(transcript, TUPLE_SIZE);
Map<Integer, WordResult> alignedWords = new TreeMap<Integer, WordResult>();
Queue<Range> ranges = new LinkedList<Range>();
Queue<List<String>> texts = new ArrayDeque<List<String>>();
Queue<TimeFrame> timeFrames = new ArrayDeque<TimeFrame>();
ranges.offer(new Range(0, transcript.size()));
texts.offer(transcript);
TimeFrame totalTimeFrame = TimeFrame.INFINITE;
timeFrames.offer(totalTimeFrame);
long lastFrame = TimeFrame.INFINITE.getEnd();
languageModel.setText(sentenceTranscript);
for (int i = 0; i < 4; ++i) {
if (i == 1) {
context.setLocalProperty("decoder->searchManager", "alignerSearchManager");
}
while (!texts.isEmpty()) {
assert texts.size() == ranges.size();
assert texts.size() == timeFrames.size();
List<String> text = texts.poll();
TimeFrame frame = timeFrames.poll();
Range range = ranges.poll();
logger.info("Aligning frame " + frame + " to text " + text + " range " + range);
recognizer.allocate();
if (i >= 1) {
grammar.setWords(text);
}
context.setSpeechSource(audioUrl.openStream(), frame);
List<WordResult> hypothesis = new ArrayList<WordResult>();
Result result;
while (null != (result = recognizer.recognize())) {
logger.info("Utterance result " + result.getTimedBestResult(true));
hypothesis.addAll(result.getTimedBestResult(false));
}
if (i == 0) {
if (hypothesis.size() > 0) {
lastFrame = hypothesis.get(hypothesis.size() - 1).getTimeFrame().getEnd();
}
}
List<String> words = new ArrayList<String>();
for (WordResult wr : hypothesis) {
words.add(wr.getWord().getSpelling());
}
int[] alignment = aligner.align(words, range);
List<WordResult> results = hypothesis;
logger.info("Decoding result is " + results);
// dumpAlignment(transcript, alignment, results);
dumpAlignmentStats(transcript, alignment, results);
for (int j = 0; j < alignment.length; j++) {
if (alignment[j] != -1) {
alignedWords.put(alignment[j], hypothesis.get(j));
}
}
recognizer.deallocate();
}
scheduleNextAlignment(transcript, alignedWords, ranges, texts, timeFrames, lastFrame);
}
return new ArrayList<WordResult>(alignedWords.values());
}
public List<String> sentenceToWords(List<String> sentenceTranscript) {
ArrayList<String> transcript = new ArrayList<String>();
for (String sentence : sentenceTranscript) {
String[] words = sentence.split("\\s+");
for (String word : words) {
if (word.length() > 0)
transcript.add(word);
}
}
return transcript;
}
private void dumpAlignmentStats(List<String> transcript, int[] alignment, List<WordResult> results) {
int insertions = 0;
int deletions = 0;
int size = transcript.size();
int[] aid = alignment;
int lastId = -1;
for (int ij = 0; ij < aid.length; ++ij) {
if (aid[ij] == -1) {
insertions++;
} else {
if (aid[ij] - lastId > 1) {
deletions += aid[ij] - lastId;
}
lastId = aid[ij];
}
}
if (lastId >= 0 && transcript.size() - lastId > 1) {
deletions += transcript.size() - lastId;
}
logger.info(String.format("Size %d deletions %d insertions %d error rate %.2f", size, insertions, deletions,
(insertions + deletions) / ((float) size) * 100f));
}
private void scheduleNextAlignment(List<String> transcript, Map<Integer, WordResult> alignedWords, Queue<Range> ranges,
Queue<List<String>> texts, Queue<TimeFrame> timeFrames, long lastFrame) {
int prevKey = 0;
long prevStart = 0;
for (Map.Entry<Integer, WordResult> e : alignedWords.entrySet()) {
if (e.getKey() - prevKey > 1) {
checkedOffer(transcript, texts, timeFrames, ranges, prevKey, e.getKey() + 1, prevStart, e.getValue()
.getTimeFrame().getEnd());
}
prevKey = e.getKey();
prevStart = e.getValue().getTimeFrame().getStart();
}
if (transcript.size() - prevKey > 1) {
checkedOffer(transcript, texts, timeFrames, ranges, prevKey, transcript.size(), prevStart, lastFrame);
}
}
public void dumpAlignment(List<String> transcript, int[] alignment, List<WordResult> results) {
logger.info("Alignment");
int[] aid = alignment;
int lastId = -1;
for (int ij = 0; ij < aid.length; ++ij) {
if (aid[ij] == -1) {
logger.info(String.format("+ %s", results.get(ij)));
} else {
if (aid[ij] - lastId > 1) {
for (String result1 : transcript.subList(lastId + 1, aid[ij])) {
logger.info(String.format("- %-25s", result1));
}
} else {
logger.info(String.format(" %-25s", transcript.get(aid[ij])));
}
lastId = aid[ij];
}
}
if (lastId >= 0 && transcript.size() - lastId > 1) {
for (String result1 : transcript.subList(lastId + 1, transcript.size())) {
logger.info(String.format("- %-25s", result1));
}
}
}
private void checkedOffer(List<String> transcript, Queue<List<String>> texts, Queue<TimeFrame> timeFrames,
Queue<Range> ranges, int start, int end, long timeStart, long timeEnd) {
double wordDensity = ((double) (timeEnd - timeStart)) / (end - start);
// Skip the range if it is too dense: the average word would last less
// than 10 milliseconds
if (wordDensity < 10.0 && (end - start) > 3) {
logger.info("Skipping text range due to a high density " + transcript.subList(start, end).toString());
return;
}
texts.offer(transcript.subList(start, end));
timeFrames.offer(new TimeFrame(timeStart, timeEnd));
ranges.offer(new Range(start, end - 1));
}
public TextTokenizer getTokenizer() {
return tokenizer;
}
public void setTokenizer(TextTokenizer wordExpander) {
this.tokenizer = wordExpander;
}
}
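
A hedged, self-contained sketch of aligning an audio file against its transcript; the model paths and audio file name are assumptions:

import java.net.URL;
import java.util.List;
import edu.cmu.sphinx.api.SpeechAligner;
import edu.cmu.sphinx.result.WordResult;

public class AlignDemo {
    public static void main(String[] args) throws Exception {
        SpeechAligner aligner = new SpeechAligner(
                "resource:/edu/cmu/sphinx/models/en-us/en-us",               // assumption
                "resource:/edu/cmu/sphinx/models/en-us/cmudict-en-us.dict",  // assumption
                null);  // no g2p model
        List<WordResult> words =
                aligner.align(new URL("file:speech.wav"), "a transcript of the audio");
        for (WordResult word : words) {
            System.out.println(word.getWord() + " " + word.getTimeFrame());
        }
    }
}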

View file

@ -1,91 +0,0 @@
/*
* Copyright 2013 Carnegie Mellon University.
* Portions Copyright 2004 Sun Microsystems, Inc.
* Portions Copyright 2004 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.api;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import edu.cmu.sphinx.recognizer.Recognizer;
import edu.cmu.sphinx.result.*;
/**
* High-level wrapper for {@link Result} instance.
*/
public final class SpeechResult {
private final Result result;
private final Lattice lattice;
/**
* Constructs recognition result based on {@link Result} object.
*
* @param result recognition result returned by {@link Recognizer}
*/
public SpeechResult(Result result) {
this.result = result;
if (result.toCreateLattice()) {
lattice = new Lattice(result);
new LatticeOptimizer(lattice).optimize();
lattice.computeNodePosteriors(1.0f);
} else
lattice = null;
}
/**
* Returns {@link List} of words of the recognition result.
* Within the list words are ordered by time frame.
*
* @return words that form the result
*/
public List<WordResult> getWords() {
return lattice != null ? lattice.getWordResultPath() : result.getTimedBestResult(false);
}
/**
* @return string representation of the result.
*/
public String getHypothesis() {
return result.getBestResultNoFiller();
}
/**
* Returns the N best hypotheses.
*
* @param n number of hypotheses to return
* @return {@link Collection} of the best hypotheses
*/
public Collection<String> getNbest(int n) {
if (lattice == null)
return new HashSet<String>();
return new Nbest(lattice).getNbest(n);
}
/**
* Returns lattice for the recognition result.
*
* @return lattice object
*/
public Lattice getLattice() {
return lattice;
}
/**
* Returns the Result object of this SpeechResult.
*
* @return the underlying recognition result
*/
public Result getResult() {
return result;
}
}
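
A short sketch of inspecting a result beyond the best hypothesis; the result variable is assumed to come from a recognizer such as the LiveSpeechRecognizer sketch above:

System.out.println("Best: " + result.getHypothesis());
for (String sentence : result.getNbest(3)) {  // empty collection if no lattice
    System.out.println("N-best: " + sentence);
}
for (WordResult word : result.getWords()) {   // words ordered by time frame
    System.out.println(word);
}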

View file

@ -1,20 +0,0 @@
/*
* Copyright 2013 Carnegie Mellon University.
* Portions Copyright 2004 Sun Microsystems, Inc.
* Portions Copyright 2004 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.api;
public class SpeechSourceProvider {
Microphone getMicrophone() {
return new Microphone(16000, 16, true, false);
}
}

View file

@ -1,66 +0,0 @@
/*
* Copyright 2013 Carnegie Mellon University.
* Portions Copyright 2004 Sun Microsystems, Inc.
* Portions Copyright 2004 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*/
package edu.cmu.sphinx.api;
import java.io.IOException;
import java.io.InputStream;
import edu.cmu.sphinx.util.TimeFrame;
/**
* Speech recognizer that works with audio resources.
*
* @see LiveSpeechRecognizer live speech recognizer
*/
public class StreamSpeechRecognizer extends AbstractSpeechRecognizer {
/**
* Constructs new stream recognizer.
*
* @param configuration configuration
* @throws IOException if an error occurred during model load
*/
public StreamSpeechRecognizer(Configuration configuration)
throws IOException
{
super(configuration);
}
public void startRecognition(InputStream stream) {
startRecognition(stream, TimeFrame.INFINITE);
}
/**
* Starts recognition process.
*
* Starts recognition process and optionally clears previous data.
*
* @param stream input stream to process
* @param timeFrame time range of the stream to process
* @see StreamSpeechRecognizer#stopRecognition()
*/
public void startRecognition(InputStream stream, TimeFrame timeFrame) {
recognizer.allocate();
context.setSpeechSource(stream, timeFrame);
}
/**
* Stops recognition process.
*
* Recognition process is paused until the next call to startRecognition.
*
* @see StreamSpeechRecognizer#startRecognition(InputStream, TimeFrame)
*/
public void stopRecognition() {
recognizer.deallocate();
}
}
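
A hedged, self-contained sketch of batch transcription; model paths are assumptions as in the earlier sketches, and the audio file is assumed to match the configured sample rate:

import java.io.FileInputStream;
import java.io.InputStream;
import edu.cmu.sphinx.api.Configuration;
import edu.cmu.sphinx.api.SpeechResult;
import edu.cmu.sphinx.api.StreamSpeechRecognizer;

public class TranscribeDemo {
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        // ... model paths as in the earlier sketches (assumptions) ...
        StreamSpeechRecognizer recognizer = new StreamSpeechRecognizer(configuration);
        InputStream stream = new FileInputStream("speech.wav");  // assumed 16 kHz audio
        recognizer.startRecognition(stream);
        SpeechResult result;
        while ((result = recognizer.getResult()) != null) {
            System.out.println(result.getHypothesis());
        }
        recognizer.stopRecognition();
    }
}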

View file

@ -1,154 +0,0 @@
/*
* Copyright 1999-2004 Carnegie Mellon University.
* Portions Copyright 2004 Sun Microsystems, Inc.
* Portions Copyright 2004 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder;
import edu.cmu.sphinx.decoder.search.SearchManager;
import edu.cmu.sphinx.result.Result;
import edu.cmu.sphinx.util.props.*;
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Logger;
/** An abstract decoder which implements all functionality which is independent of the used decoding-paradigm (pull/push). */
public abstract class AbstractDecoder implements ResultProducer, Configurable {
/**
* The property that defines the name of the search manager to use
* */
@S4Component(type = SearchManager.class)
public final static String PROP_SEARCH_MANAGER = "searchManager";
protected SearchManager searchManager;
@S4ComponentList(type = ResultListener.class)
public static final String PROP_RESULT_LISTENERS = "resultListeners";
protected final List<ResultListener> resultListeners = new ArrayList<ResultListener>();
/**
* If set to true the used search-manager will be automatically allocated
* in <code>newProperties()</code>.
* */
@S4Boolean(defaultValue = false)
public static final String AUTO_ALLOCATE = "autoAllocate";
/**
* If set to <code>false</code>, all registered result listeners will be
* notified only of final results. By default, non-final results do not
* trigger notification, because in most applications the final utterance
* result is sufficient.
*/
@S4Boolean(defaultValue = false)
public static final String FIRE_NON_FINAL_RESULTS = "fireNonFinalResults";
private boolean fireNonFinalResults;
private String name;
protected Logger logger;
public AbstractDecoder() {
}
/**
* Abstract decoder to implement live and batch recognizers
* @param searchManager search manager to use
* @param fireNonFinalResults to fire result during decoding
* @param autoAllocate automatic allocate all components
* @param resultListeners listeners to get notification
*/
public AbstractDecoder(SearchManager searchManager, boolean fireNonFinalResults, boolean autoAllocate, List<ResultListener> resultListeners) {
String name = getClass().getName();
init( name, Logger.getLogger(name),
searchManager, fireNonFinalResults, autoAllocate, resultListeners);
}
/**
* Decode frames until recognition is complete
*
* @param referenceText the reference text (or null)
* @return a result
*/
public abstract Result decode(String referenceText);
public void newProperties(PropertySheet ps) throws PropertyException {
init(ps.getInstanceName(), ps.getLogger(),
        (SearchManager) ps.getComponent(PROP_SEARCH_MANAGER),
        ps.getBoolean(FIRE_NON_FINAL_RESULTS),
        ps.getBoolean(AUTO_ALLOCATE),
        ps.getComponentList(PROP_RESULT_LISTENERS, ResultListener.class));
}
private void init(String name, Logger logger, SearchManager searchManager, boolean fireNonFinalResults, boolean autoAllocate, List<ResultListener> listeners) {
this.name = name;
this.logger = logger;
this.searchManager = searchManager;
this.fireNonFinalResults = fireNonFinalResults;
if (autoAllocate) {
searchManager.allocate();
}
for (ResultListener listener : listeners) {
addResultListener(listener);
}
}
/** Allocate resources necessary for decoding */
public void allocate() {
searchManager.allocate();
}
/** Deallocate resources */
public void deallocate() {
searchManager.deallocate();
}
/**
* Adds a result listener to this recognizer. A result listener is called whenever a new result is generated by the
* recognizer. This method can be called in any state.
*
* @param resultListener the listener to add
*/
public void addResultListener(ResultListener resultListener) {
resultListeners.add(resultListener);
}
/**
* Removes a previously added result listener. This method can be called in any state.
*
* @param resultListener the listener to remove
*/
public void removeResultListener(ResultListener resultListener) {
resultListeners.remove(resultListener);
}
/**
* Fires new results as soon as they become available.
*
* @param result the new result
*/
protected void fireResultListeners(Result result) {
if (fireNonFinalResults || result.isFinal()) {
for (ResultListener resultListener : resultListeners) {
resultListener.newResult(result);
}
} else {
logger.finer("skipping non-final result " + result);
}
}
@Override
public String toString() {
return name;
}
}

View file

@ -1,74 +0,0 @@
/*
* Copyright 1999-2004 Carnegie Mellon University.
* Portions Copyright 2004 Sun Microsystems, Inc.
* Portions Copyright 2004 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder;
import edu.cmu.sphinx.result.Result;
import edu.cmu.sphinx.util.props.PropertyException;
import edu.cmu.sphinx.util.props.PropertySheet;
import edu.cmu.sphinx.util.props.S4Integer;
import edu.cmu.sphinx.decoder.search.SearchManager;
import java.util.List;
/** The primary decoder class */
public class Decoder extends AbstractDecoder {
public Decoder() {
// Keep this or else XML configuration fails.
}
/** The property for the number of features to recognize at once. */
@S4Integer(defaultValue = Integer.MAX_VALUE)
public final static String PROP_FEATURE_BLOCK_SIZE = "featureBlockSize";
private int featureBlockSize;
@Override
public void newProperties(PropertySheet ps) throws PropertyException {
super.newProperties(ps);
featureBlockSize = ps.getInt(PROP_FEATURE_BLOCK_SIZE);
}
/**
* Main decoder
*
* @param searchManager search manager to configure search space
* @param fireNonFinalResults should we notify about non-final results
* @param autoAllocate automatic allocation of all components
* @param resultListeners listeners to get signals
* @param featureBlockSize frequency of notification about results
*/
public Decoder( SearchManager searchManager, boolean fireNonFinalResults, boolean autoAllocate, List<ResultListener> resultListeners, int featureBlockSize) {
super( searchManager, fireNonFinalResults, autoAllocate, resultListeners);
this.featureBlockSize = featureBlockSize;
}
/**
* Decode frames until recognition is complete.
*
* @param referenceText the reference text (or null)
* @return a result
*/
@Override
public Result decode(String referenceText) {
searchManager.startRecognition();
Result result;
do {
result = searchManager.recognize(featureBlockSize);
if (result != null) {
result.setReferenceText(referenceText);
fireResultListeners(result);
}
} while (result != null && !result.isFinal());
searchManager.stopRecognition();
return result;
}
}

View file

@ -1,104 +0,0 @@
/*
*
* Copyright 1999-2004 Carnegie Mellon University.
* Portions Copyright 2004 Sun Microsystems, Inc.
* Portions Copyright 2004 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder;
import edu.cmu.sphinx.frontend.*;
import edu.cmu.sphinx.frontend.endpoint.SpeechEndSignal;
import edu.cmu.sphinx.frontend.endpoint.SpeechStartSignal;
import edu.cmu.sphinx.result.Result;
import edu.cmu.sphinx.decoder.search.SearchManager;
import java.util.List;
/**
* A decoder which does not use the common pull-principle of S4 but processes only a single frame on each call of
* <code>decode()</code>. When using this decoder, make sure that the <code>AcousticScorer</code> used by the
* <code>SearchManager</code> can access some buffered <code>Data</code>s.
*/
public class FrameDecoder extends AbstractDecoder implements DataProcessor {
private DataProcessor predecessor;
private boolean isRecognizing;
private Result result;
public FrameDecoder( SearchManager searchManager, boolean fireNonFinalResults, boolean autoAllocate, List<ResultListener> listeners) {
super(searchManager, fireNonFinalResults, autoAllocate, listeners);
}
public FrameDecoder() {
}
/**
* Decode a single frame.
*
* @param referenceText the reference text (or null)
* @return a result
*/
@Override
public Result decode(String referenceText) {
return searchManager.recognize(1);
}
public Data getData() throws DataProcessingException {
Data d = getPredecessor().getData();
if (isRecognizing && (d instanceof FloatData || d instanceof DoubleData || d instanceof SpeechEndSignal)) {
result = decode(null);
if (result != null) {
fireResultListeners(result);
result = null;
}
}
// we also trigger recognition on a DataEndSignal to allow threaded scorers to shut down correctly
if (d instanceof DataEndSignal) {
searchManager.stopRecognition();
}
if (d instanceof SpeechStartSignal) {
searchManager.startRecognition();
isRecognizing = true;
result = null;
}
if (d instanceof SpeechEndSignal) {
searchManager.stopRecognition();
// fire results which were not yet final
if (result != null)
fireResultListeners(result);
isRecognizing = false;
}
return d;
}
public DataProcessor getPredecessor() {
return predecessor;
}
public void setPredecessor(DataProcessor predecessor) {
this.predecessor = predecessor;
}
public void initialize() {
}
}
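
Because this decoder is push-driven, it only decodes when data is pulled through it. A hedged wiring sketch; frontEnd is an assumed upstream DataProcessor:

// Sketch: pull data through the FrameDecoder, one frame decoded per
// speech Data object, until the end of the stream.
frameDecoder.setPredecessor(frontEnd);
Data data;
do {
    data = frameDecoder.getData();  // decodes one frame per speech Data
} while (data != null && !(data instanceof DataEndSignal));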

View file

@ -1,30 +0,0 @@
/*
* Copyright 1999-2002 Carnegie Mellon University.
* Portions Copyright 2002 Sun Microsystems, Inc.
* Portions Copyright 2002 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder;
import edu.cmu.sphinx.util.props.Configurable;
import edu.cmu.sphinx.result.Result;
import java.util.EventListener;
/** The listener interface for being informed when new results are generated. */
public interface ResultListener extends EventListener, Configurable {
/**
* Method called when a new result is generated
*
* @param result the new result
*/
public void newResult(Result result);
}
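
A hedged sketch of registering a listener on a decoder; newProperties is required by the Configurable super-interface and is left empty here, and the decoder variable is an assumption:

decoder.addResultListener(new ResultListener() {
    public void newResult(Result result) {
        System.out.println(result.getBestResultNoFiller());
    }
    public void newProperties(PropertySheet ps) throws PropertyException {
        // no configurable properties in this sketch
    }
});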

View file

@ -1,33 +0,0 @@
/*
* Copyright 1999-2004 Carnegie Mellon University.
* Portions Copyright 2004 Sun Microsystems, Inc.
* Portions Copyright 2004 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder;
import edu.cmu.sphinx.util.props.Configurable;
/**
* Some API-elements shared by components which are able to produce <code>Result</code>s.
*
* @see edu.cmu.sphinx.result.Result
*/
public interface ResultProducer extends Configurable {
/** Registers a new listener for <code>Result</code>.
* @param resultListener listener to add
*/
void addResultListener(ResultListener resultListener);
/** Removes a listener from this <code>ResultProducer</code>-instance.
* @param resultListener listener to remove
*/
void removeResultListener(ResultListener resultListener);
}

View file

@ -1,174 +0,0 @@
package edu.cmu.sphinx.decoder.adaptation;
import java.util.ArrayList;
import java.util.Random;
import org.apache.commons.math3.util.FastMath;
import edu.cmu.sphinx.linguist.acoustic.tiedstate.Loader;
import edu.cmu.sphinx.linguist.acoustic.tiedstate.Pool;
/**
* Used for clustering Gaussians. The clustering is performed with the
* "k-means" algorithm using the Euclidean distance criterion.
*
* @author Bogdan Petcu
*/
public class ClusteredDensityFileData {
private int numberOfClusters;
private int[] corespondingClass;
public ClusteredDensityFileData(Loader loader, int numberOfClusters) {
this.numberOfClusters = numberOfClusters;
kMeansClustering(loader, 30);
}
public int getNumberOfClusters() {
return this.numberOfClusters;
}
/**
* Used for accessing the index that is specific to a gaussian.
*
* @param gaussian
*            the Gaussian index, in the form i * numStates + gaussianIndex.
* @return class index
*/
public int getClassIndex(int gaussian) {
return corespondingClass[gaussian];
}
/**
* Computes the Euclidean distance between two n-dimensional points.
*
* @param a
* - n-dimensional "a" point
* @param b
* - n-dimensional "b" point
* @return the Euclidean distance between a and b.
*/
private float euclidianDistance(float[] a, float[] b) {
double s = 0, d;
for (int i = 0; i < a.length; i++) {
d = a[i] - b[i];
s += d * d;
}
return (float) FastMath.sqrt(s);
}
/**
* Checks if the two float arrays have the same components.
*
* @param a
* - float array a
* @param b
* - float array b
* @return true if values from a are equal to the ones in b, else false.
*/
private boolean isEqual(float[] a, float[] b) {
if (a.length != b.length) {
return false;
}
for (int i = 0; i < a.length; i++) {
if (a[i] != b[i]) {
return false;
}
}
return true;
}
/**
* Performs the k-means clustering algorithm on the Gaussians.
* Clustering is done using the Euclidean distance criterion.
*
* @param loader
*            loader providing the pool of Gaussian means
* @param maxIterations
*            maximum number of clustering iterations
*/
private void kMeansClustering(Loader loader, int maxIterations) {
Pool<float[]> initialData = loader.getMeansPool();
ArrayList<float[]> oldCentroids = new ArrayList<float[]>(
numberOfClusters);
ArrayList<float[]> centroids = new ArrayList<float[]>(numberOfClusters);
int numberOfElements = initialData.size(), nrOfIterations = maxIterations, index;
int[] count = new int[numberOfClusters];
double distance, min;
float[] currentValue, centroid;
float[][][] array = new float[numberOfClusters][numberOfElements][];
boolean converged = false;
Random randomGenerator = new Random();
for (int i = 0; i < numberOfClusters; i++) {
index = randomGenerator.nextInt(numberOfElements);
centroids.add(initialData.get(index));
oldCentroids.add(initialData.get(index));
count[i] = 0;
}
index = 0;
while (!converged && nrOfIterations > 0) {
correspondingClass = new int[initialData.size()];
array = new float[numberOfClusters][numberOfElements][];
for (int i = 0; i < numberOfClusters; i++) {
oldCentroids.set(i, centroids.get(i));
count[i] = 0;
}
for (int i = 0; i < initialData.size(); i++) {
currentValue = initialData.get(i);
min = this.euclideanDistance(oldCentroids.get(0), currentValue);
index = 0;
for (int k = 1; k < numberOfClusters; k++) {
distance = this.euclideanDistance(oldCentroids.get(k),
currentValue);
if (distance < min) {
min = distance;
index = k;
}
}
array[index][count[index]] = currentValue;
correspondingClass[i] = index;
count[index]++;
}
for (int i = 0; i < numberOfClusters; i++) {
centroid = new float[initialData.get(0).length];
if (count[i] > 0) {
for (int j = 0; j < count[i]; j++) {
for (int k = 0; k < initialData.get(0).length; k++) {
centroid[k] += array[i][j][k];
}
}
for (int k = 0; k < initialData.get(0).length; k++) {
centroid[k] /= count[i];
}
centroids.set(i, centroid);
}
}
converged = true;
for (int i = 0; i < numberOfClusters; i++) {
converged = converged
&& (this.isEqual(centroids.get(i), oldCentroids.get(i)));
}
nrOfIterations--;
}
}
}
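The assignment step of the algorithm above is just a nearest-centroid search under Euclidean distance. A self-contained sketch of that step, independent of the Loader machinery (method name and inputs are illustrative):
// Sketch: index of the centroid closest to a point, by squared Euclidean distance.
static int nearestCentroid(float[] point, java.util.List<float[]> centroids) {
    int best = 0;
    double bestDist = Double.MAX_VALUE;
    for (int c = 0; c < centroids.size(); c++) {
        double s = 0, d;
        for (int k = 0; k < point.length; k++) {
            d = point[k] - centroids.get(c)[k];
            s += d * d;
        }
        if (s < bestDist) { // squared distances preserve the ordering, so no sqrt needed
            bestDist = s;
            best = c;
        }
    }
    return best;
}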

View file

@ -1,235 +0,0 @@
package edu.cmu.sphinx.decoder.adaptation;
import edu.cmu.sphinx.api.SpeechResult;
import edu.cmu.sphinx.decoder.search.Token;
import edu.cmu.sphinx.frontend.FloatData;
import edu.cmu.sphinx.linguist.HMMSearchState;
import edu.cmu.sphinx.linguist.SearchState;
import edu.cmu.sphinx.linguist.acoustic.tiedstate.Loader;
import edu.cmu.sphinx.linguist.acoustic.tiedstate.Sphinx3Loader;
import edu.cmu.sphinx.util.LogMath;
/**
* This class is used for estimating an MLLR transform for each cluster of data.
* The clustering must be performed beforehand using
* ClusteredDensityFileData.
*
* @author Bogdan Petcu
*/
public class Stats {
private ClusteredDensityFileData means;
private double[][][][][] regLs;
private double[][][][] regRs;
private int nrOfClusters;
private Sphinx3Loader loader;
private float varFlor;
private LogMath logMath = LogMath.getLogMath();
public Stats(Loader loader, ClusteredDensityFileData means) {
this.loader = (Sphinx3Loader) loader;
this.nrOfClusters = means.getNumberOfClusters();
this.means = means;
this.varFlor = (float) 1e-5;
this.invertVariances();
this.init();
}
private void init() {
int len = loader.getVectorLength()[0];
this.regLs = new double[nrOfClusters][][][][];
this.regRs = new double[nrOfClusters][][][];
for (int i = 0; i < nrOfClusters; i++) {
this.regLs[i] = new double[loader.getNumStreams()][][][];
this.regRs[i] = new double[loader.getNumStreams()][][];
for (int j = 0; j < loader.getNumStreams(); j++) {
len = loader.getVectorLength()[j];
this.regLs[i][j] = new double[len][len + 1][len + 1];
this.regRs[i][j] = new double[len][len + 1];
}
}
}
public ClusteredDensityFileData getClusteredData() {
return this.means;
}
public double[][][][][] getRegLs() {
return regLs;
}
public double[][][][] getRegRs() {
return regRs;
}
/**
* Used for inverting variances.
*/
private void invertVariances() {
for (int i = 0; i < loader.getNumStates(); i++) {
for (int k = 0; k < loader.getNumGaussiansPerState(); k++) {
for (int l = 0; l < loader.getVectorLength()[0]; l++) {
if (loader.getVariancePool().get(
i * loader.getNumGaussiansPerState() + k)[l] <= 0.) {
this.loader.getVariancePool().get(
i * loader.getNumGaussiansPerState() + k)[l] = (float) 0.5;
} else if (loader.getVariancePool().get(
i * loader.getNumGaussiansPerState() + k)[l] < varFlor) {
this.loader.getVariancePool().get(
i * loader.getNumGaussiansPerState() + k)[l] = (float) (1. / varFlor);
} else {
this.loader.getVariancePool().get(
i * loader.getNumGaussiansPerState() + k)[l] = (float) (1. / loader
.getVariancePool().get(
i * loader.getNumGaussiansPerState()
+ k)[l]);
}
}
}
}
}
/**
* Computes posterior values for each component.
*
* @param componentScores
* from which the posterior values are computed.
* @param numStreams
* Number of feature streams
* @return posterior values for all components.
*/
private float[] computePosteriors(float[] componentScores, int numStreams) {
float[] posteriors = componentScores;
int step = componentScores.length / numStreams;
int startIdx = 0;
for (int i = 0; i < numStreams; i++) {
float max = posteriors[startIdx];
for (int j = startIdx + 1; j < startIdx + step; j++) {
if (posteriors[j] > max) {
max = posteriors[j];
}
}
for (int j = startIdx; j < startIdx + step; j++) {
posteriors[j] = (float) logMath.logToLinear(posteriors[j] - max);
}
startIdx += step;
}
return posteriors;
}
/**
* This method is used to directly collect and use counts. The counts are
* collected and stored separately for each cluster.
*
* @param result
* Result object to collect counts from.
* @throws Exception if something went wrong
*/
public void collect(SpeechResult result) throws Exception {
Token token = result.getResult().getBestToken();
float[] componentScore, featureVector, posteriors, tmean;
int[] len;
float dnom, wtMeanVar, wtDcountVar, wtDcountVarMean, mean;
int mId, cluster;
int numStreams, gauPerState;
if (token == null)
throw new Exception("Best token not found!");
do {
FloatData feature = (FloatData) token.getData();
SearchState ss = token.getSearchState();
if (!(ss instanceof HMMSearchState && ss.isEmitting())) {
token = token.getPredecessor();
continue;
}
componentScore = token.calculateComponentScore(feature);
featureVector = FloatData.toFloatData(feature).getValues();
mId = (int) ((HMMSearchState) token.getSearchState()).getHMMState()
.getMixtureId();
if (loader instanceof Sphinx3Loader && ((Sphinx3Loader) loader).hasTiedMixtures())
// use CI phone ID for tied mixture model
mId = ((Sphinx3Loader) loader).getSenone2Ci()[mId];
len = loader.getVectorLength();
numStreams = loader.getNumStreams();
gauPerState = loader.getNumGaussiansPerState();
posteriors = this.computePosteriors(componentScore, numStreams);
int featVectorStartIdx = 0;
for (int i = 0; i < numStreams; i++) {
for (int j = 0; j < gauPerState; j++) {
cluster = means.getClassIndex(mId * numStreams
* gauPerState + i * gauPerState + j);
dnom = posteriors[i * gauPerState + j];
if (dnom > 0.) {
tmean = loader.getMeansPool().get(
mId * numStreams * gauPerState + i
* gauPerState + j);
for (int k = 0; k < len[i]; k++) {
mean = posteriors[i * gauPerState + j]
* featureVector[k + featVectorStartIdx];
wtMeanVar = mean
* loader.getVariancePool().get(
mId * numStreams * gauPerState + i
* gauPerState + j)[k];
wtDcountVar = dnom
* loader.getVariancePool().get(
mId * numStreams * gauPerState + i
* gauPerState + j)[k];
for (int p = 0; p < len[i]; p++) {
wtDcountVarMean = wtDcountVar * tmean[p];
for (int q = p; q < len[i]; q++) {
regLs[cluster][i][k][p][q] += wtDcountVarMean
* tmean[q];
}
regLs[cluster][i][k][p][len[i]] += wtDcountVarMean;
regRs[cluster][i][k][p] += wtMeanVar * tmean[p];
}
regLs[cluster][i][k][len[i]][len[i]] += wtDcountVar;
regRs[cluster][i][k][len[i]] += wtMeanVar;
}
}
}
featVectorStartIdx += len[i];
}
token = token.getPredecessor();
} while (token != null);
}
/**
* Fills the lower part of Leggetter's set of G matrices.
*/
public void fillRegLowerPart() {
for (int i = 0; i < this.nrOfClusters; i++) {
for (int j = 0; j < loader.getNumStreams(); j++) {
for (int l = 0; l < loader.getVectorLength()[j]; l++) {
for (int p = 0; p <= loader.getVectorLength()[j]; p++) {
for (int q = p + 1; q <= loader.getVectorLength()[j]; q++) {
regLs[i][j][l][q][p] = regLs[i][j][l][p][q];
}
}
}
}
}
}
public Transform createTransform() {
Transform transform = new Transform(loader, nrOfClusters);
transform.update(this);
return transform;
}
}
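A hedged sketch of how this class is typically driven. The recognizer and loader objects are assumed to be configured elsewhere; collect() and store() both declare throws Exception, so a real caller must handle or propagate that.
// Sketch: accumulate MLLR statistics over decoded utterances, then estimate a transform.
ClusteredDensityFileData clusters = new ClusteredDensityFileData(loader, 1);
Stats stats = new Stats(loader, clusters);
SpeechResult result;
while ((result = recognizer.getResult()) != null) {
    stats.collect(result);          // per-cluster regression statistics
}
Transform transform = stats.createTransform();
transform.store("mllr_matrix", 0);  // persist for later decoding runs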

View file

@ -1,179 +0,0 @@
package edu.cmu.sphinx.decoder.adaptation;
import java.io.File;
import java.io.PrintWriter;
import java.util.Scanner;
import org.apache.commons.math3.linear.Array2DRowRealMatrix;
import org.apache.commons.math3.linear.ArrayRealVector;
import org.apache.commons.math3.linear.DecompositionSolver;
import org.apache.commons.math3.linear.LUDecomposition;
import org.apache.commons.math3.linear.RealMatrix;
import org.apache.commons.math3.linear.RealVector;
import edu.cmu.sphinx.linguist.acoustic.tiedstate.Sphinx3Loader;
public class Transform {
private float[][][][] As;
private float[][][] Bs;
private Sphinx3Loader loader;
private int nrOfClusters;
public Transform(Sphinx3Loader loader, int nrOfClusters) {
this.loader = loader;
this.nrOfClusters = nrOfClusters;
}
/**
* Used for accessing the A matrices.
*
* @return A matrix (representing A from A*x + B = C)
*/
public float[][][][] getAs() {
return As;
}
/**
* Used for accessing the B matrices.
*
* @return B matrix (representing B from A*x + B = C)
*/
public float[][][] getBs() {
return Bs;
}
/**
* Writes the transformation to a file in a format that can further be used
* in Sphinx3 and Sphinx4.
*
* @param filePath path to store transform matrix
* @param index index of transform to store
* @throws Exception if something went wrong
*/
public void store(String filePath, int index) throws Exception {
PrintWriter writer = new PrintWriter(filePath, "UTF-8");
// nMllrClass
writer.println("1");
writer.println(loader.getNumStreams());
for (int i = 0; i < loader.getNumStreams(); i++) {
writer.println(loader.getVectorLength()[i]);
for (int j = 0; j < loader.getVectorLength()[i]; j++) {
for (int k = 0; k < loader.getVectorLength()[i]; ++k) {
writer.print(As[index][i][j][k]);
writer.print(" ");
}
writer.println();
}
for (int j = 0; j < loader.getVectorLength()[i]; j++) {
writer.print(Bs[index][i][j]);
writer.print(" ");
}
writer.println();
for (int j = 0; j < loader.getVectorLength()[i]; j++) {
writer.print("1.0 ");
}
writer.println();
}
writer.close();
}
/**
* Used for computing the actual transformations (A and B matrices). These
* are stored in As and Bs.
*/
private void computeMllrTransforms(double[][][][][] regLs,
double[][][][] regRs) {
int len;
DecompositionSolver solver;
RealMatrix coef;
RealVector vect, ABloc;
for (int c = 0; c < nrOfClusters; c++) {
this.As[c] = new float[loader.getNumStreams()][][];
this.Bs[c] = new float[loader.getNumStreams()][];
for (int i = 0; i < loader.getNumStreams(); i++) {
len = loader.getVectorLength()[i];
this.As[c][i] = new float[len][len];
this.Bs[c][i] = new float[len];
for (int j = 0; j < len; ++j) {
coef = new Array2DRowRealMatrix(regLs[c][i][j], false);
solver = new LUDecomposition(coef).getSolver();
vect = new ArrayRealVector(regRs[c][i][j], false);
ABloc = solver.solve(vect);
for (int k = 0; k < len; ++k) {
this.As[c][i][j][k] = (float) ABloc.getEntry(k);
}
this.Bs[c][i][j] = (float) ABloc.getEntry(len);
}
}
}
}
/**
* Reads the transformation from a file
*
* @param filePath file path to load transform
* @throws Exception if something went wrong
*/
public void load(String filePath) throws Exception {
Scanner input = new Scanner(new File(filePath));
int numStreams, nMllrClass;
nMllrClass = input.nextInt();
assert nMllrClass == 1;
numStreams = input.nextInt();
int[] vectorLength = new int[numStreams]; // one entry per stream
this.As = new float[nMllrClass][numStreams][][];
this.Bs = new float[nMllrClass][numStreams][];
for (int i = 0; i < numStreams; i++) {
    vectorLength[i] = input.nextInt();
    int length = vectorLength[i];
    // allocate per stream, since streams may have different vector lengths
    this.As[0][i] = new float[length][length];
    this.Bs[0][i] = new float[length];
for (int j = 0; j < length; j++) {
for (int k = 0; k < length; ++k) {
As[0][i][j][k] = input.nextFloat();
}
}
for (int j = 0; j < length; j++) {
Bs[0][i][j] = input.nextFloat();
}
}
input.close();
}
/**
* Stores in the current object a transform generated from the provided stats.
*
* @param stats
* provided stats that were previously collected from Result
* objects.
*/
public void update(Stats stats) {
stats.fillRegLowerPart();
As = new float[nrOfClusters][][][];
Bs = new float[nrOfClusters][][];
this.computeMllrTransforms(stats.getRegLs(), stats.getRegRs());
}
}
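Given the accessors above, applying an estimated transform to a single mean vector is a plain affine map, C = A * x + B. A minimal sketch, where the cluster index c, stream index i, and the mean vector are assumptions:
// Sketch: adapted mean = A * mean + B for assumed cluster c and stream i.
float[][] A = transform.getAs()[c][i];
float[] B = transform.getBs()[c][i];
float[] adapted = new float[mean.length];
for (int row = 0; row < mean.length; row++) {
    float sum = B[row];
    for (int col = 0; col < mean.length; col++) {
        sum += A[row][col] * mean[col];
    }
    adapted[row] = sum;
}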

View file

@ -1,71 +0,0 @@
/*
* Copyright 1999-2002 Carnegie Mellon University.
* Portions Copyright 2002 Sun Microsystems, Inc.
* Portions Copyright 2002 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.pruner;
import edu.cmu.sphinx.decoder.search.ActiveList;
import edu.cmu.sphinx.util.props.PropertyException;
import edu.cmu.sphinx.util.props.PropertySheet;
/** A Null pruner. Does no actual pruning */
public class NullPruner implements Pruner {
/* (non-Javadoc)
* @see edu.cmu.sphinx.util.props.Configurable#newProperties(edu.cmu.sphinx.util.props.PropertySheet)
*/
public void newProperties(PropertySheet ps) throws PropertyException {
}
/** Creates a null pruner */
public NullPruner() {
}
/** starts the pruner */
public void startRecognition() {
}
/**
* prunes the given set of states
*
* @param activeList the active list of tokens
* @return the pruned (and possibly new) activeList
*/
public ActiveList prune(ActiveList activeList) {
return activeList;
}
/** Performs post-recognition cleanup. */
public void stopRecognition() {
}
/* (non-Javadoc)
* @see edu.cmu.sphinx.decoder.pruner.Pruner#allocate()
*/
public void allocate() {
}
/* (non-Javadoc)
* @see edu.cmu.sphinx.decoder.pruner.Pruner#deallocate()
*/
public void deallocate() {
}
}

View file

@ -1,49 +0,0 @@
/*
* Copyright 1999-2002 Carnegie Mellon University.
* Portions Copyright 2002 Sun Microsystems, Inc.
* Portions Copyright 2002 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.pruner;
import edu.cmu.sphinx.decoder.search.ActiveList;
import edu.cmu.sphinx.util.props.Configurable;
/** Provides a mechanism for pruning a set of StateTokens */
public interface Pruner extends Configurable {
/** Starts the pruner */
public void startRecognition();
/**
* prunes the given set of states
*
* @param stateTokenList a list containing StateToken objects to be scored
* @return the pruned list (may be the same list as stateTokenList)
*/
public ActiveList prune(ActiveList stateTokenList);
/** Performs post-recognition cleanup. */
public void stopRecognition();
/** Allocates resources necessary for this pruner */
public void allocate();
/** Deallocates resources necessary for this pruner */
public void deallocate();
}
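The interface implies a fixed call order. A hedged sketch of one utterance's worth of calls; the pruner, activeList, and the loop condition are assumptions:
// Sketch: expected Pruner lifecycle around a single utterance.
pruner.allocate();
pruner.startRecognition();
while (hasMoreFrames) {                      // assumed loop condition
    activeList = pruner.prune(activeList);   // may return a new, smaller list
}
pruner.stopRecognition();
pruner.deallocate();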

View file

@ -1,80 +0,0 @@
/*
* Copyright 1999-2002 Carnegie Mellon University.
* Portions Copyright 2002 Sun Microsystems, Inc.
* Portions Copyright 2002 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.pruner;
import edu.cmu.sphinx.decoder.search.ActiveList;
import edu.cmu.sphinx.util.props.PropertyException;
import edu.cmu.sphinx.util.props.PropertySheet;
/** Performs the default pruning behavior which is to invoke the purge on the active list */
public class SimplePruner implements Pruner {
private String name;
/* (non-Javadoc)
* @see edu.cmu.sphinx.util.props.Configurable#newProperties(edu.cmu.sphinx.util.props.PropertySheet)
*/
public void newProperties(PropertySheet ps) throws PropertyException {
}
public SimplePruner() {
}
/* (non-Javadoc)
* @see edu.cmu.sphinx.util.props.Configurable#getName()
*/
public String getName() {
return name;
}
/** Starts the pruner */
public void startRecognition() {
}
/**
* prunes the given set of states
*
* @param activeList an active list of tokens
* @return the purged active list
*/
public ActiveList prune(ActiveList activeList) {
return activeList.purge();
}
/** Performs post-recognition cleanup. */
public void stopRecognition() {
}
/* (non-Javadoc)
* @see edu.cmu.sphinx.decoder.pruner.Pruner#allocate()
*/
public void allocate() {
}
/* (non-Javadoc)
* @see edu.cmu.sphinx.decoder.pruner.Pruner#deallocate()
*/
public void deallocate() {
}
}

View file

@ -1,57 +0,0 @@
/*
* Copyright 1999-2002 Carnegie Mellon University.
* Portions Copyright 2002 Sun Microsystems, Inc.
* Portions Copyright 2002 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.scorer;
import edu.cmu.sphinx.frontend.Data;
import edu.cmu.sphinx.util.props.Configurable;
import java.util.List;
/** Provides a mechanism for scoring a set of HMM states */
public interface AcousticScorer extends Configurable {
/** Allocates resources for this scorer */
public void allocate();
/** Deallocates resources for this scorer */
public void deallocate();
/** starts the scorer */
public void startRecognition();
/** stops the scorer */
public void stopRecognition();
/**
* Scores the given set of states over previously stored acoustic data, if any, or over newly retrieved data
*
* @param scorableList a list containing Scoreable objects to be scored
* @return the best scoring scoreable, or null if there are no more frames to score
*/
public Data calculateScores(List<? extends Scoreable> scorableList);
/**
* Scores the given set of states over acoustic data retrieved from the frontend,
* and stores that data in a queue for later re-scoring
*
* @param scorableList a list containing Scoreable objects to be scored
* @return the best scoring scoreable, or null if there are no more frames to score
*/
public Data calculateScoresAndStoreData(List<? extends Scoreable> scorableList);
}
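A hedged sketch of one frame of scoring as a search manager might drive it; scorer and activeList are assumed to exist, and Token is one Scoreable implementation:
// Sketch: score the active tokens for one frame.
scorer.allocate();
scorer.startRecognition();
Data best = scorer.calculateScores(activeList.getTokens());
if (best == null) {
    // no more frames: the utterance is finished
} else if (best instanceof Scoreable) {
    // best is the highest scoring entry for this frame
}
scorer.stopRecognition();
scorer.deallocate();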

View file

@ -1,67 +0,0 @@
package edu.cmu.sphinx.decoder.scorer;
import edu.cmu.sphinx.decoder.search.SimpleBreadthFirstSearchManager;
import edu.cmu.sphinx.decoder.search.Token;
import edu.cmu.sphinx.util.props.PropertyException;
import edu.cmu.sphinx.util.props.PropertySheet;
import edu.cmu.sphinx.util.props.S4Component;
import java.util.List;
import java.util.logging.Logger;
/**
* Normalizes a set of Tokens against the best scoring Token of a background model.
*
* @author Holger Brandl
*/
public class BackgroundModelNormalizer implements ScoreNormalizer {
/**
* The active list provider used to determine the best token for normalization. If this reference is not defined, no
* normalization will be applied.
*/
@S4Component(type = SimpleBreadthFirstSearchManager.class, mandatory = false)
public static final String ACTIVE_LIST_PROVIDER = "activeListProvider";
private SimpleBreadthFirstSearchManager activeListProvider;
private Logger logger;
public BackgroundModelNormalizer() {
}
public void newProperties(PropertySheet ps) throws PropertyException {
this.activeListProvider = (SimpleBreadthFirstSearchManager) ps.getComponent(ACTIVE_LIST_PROVIDER);
this.logger = ps.getLogger();
if (activeListProvider == null)
    logger.warning("no active list set.");
}
/**
* @param activeListProvider The active list provider used to determine the best token for normalization. If this reference is not defined, no
* normalization will be applied.
*/
public BackgroundModelNormalizer(SimpleBreadthFirstSearchManager activeListProvider) {
this.activeListProvider = activeListProvider;
this.logger = Logger.getLogger(getClass().getName());
if (activeListProvider == null)
    logger.warning("no active list set.");
}
public Scoreable normalize(List<? extends Scoreable> scoreableList, Scoreable bestToken) {
if (activeListProvider == null) {
return bestToken;
}
Token normToken = activeListProvider.getActiveList().getBestToken();
float normScore = normToken.getScore();
for (Scoreable scoreable : scoreableList) {
if (scoreable instanceof Token) {
scoreable.normalizeScore(normScore);
}
}
return bestToken;
}
}

View file

@ -1,30 +0,0 @@
package edu.cmu.sphinx.decoder.scorer;
import edu.cmu.sphinx.util.props.PropertyException;
import edu.cmu.sphinx.util.props.PropertySheet;
import java.util.List;
/**
* Performs a simple normalization of all token-scores by the score of the best token.
*
* @author Holger Brandl
*/
public class MaxScoreNormalizer implements ScoreNormalizer {
public void newProperties(PropertySheet ps) throws PropertyException {
}
public MaxScoreNormalizer() {
}
public Scoreable normalize(List<? extends Scoreable> scoreableList, Scoreable bestToken) {
for (Scoreable scoreable : scoreableList) {
scoreable.normalizeScore(bestToken.getScore());
}
return bestToken;
}
}

View file

@ -1,27 +0,0 @@
package edu.cmu.sphinx.decoder.scorer;
import edu.cmu.sphinx.util.props.Configurable;
import java.util.List;
/**
* Describes all API-elements that are necessary to normalize token-scores after these have been computed by an
* AcousticScorer.
*
* @author Holger Brandl
* @see edu.cmu.sphinx.decoder.scorer.AcousticScorer
* @see edu.cmu.sphinx.decoder.search.Token
*/
public interface ScoreNormalizer extends Configurable {
/**
* Normalizes the scores of a set of Tokens.
*
* @param scoreableList The set of scores to be normalized
* @param bestToken The best scoring Token of the above mentioned list. Although not strictly necessary, it's
*                  included for convenience and to reduce computational overhead.
* @return The best token after all <code>Token</code>s have been normalized. In most cases normalization won't
*         change the order, but to keep the API open for any kind of approach it seemed reasonable to include this.
*/
Scoreable normalize(List<? extends Scoreable> scoreableList, Scoreable bestToken);
}

View file

@ -1,35 +0,0 @@
/*
* Copyright 1999-2010 Carnegie Mellon University.
* Portions Copyright 2002 Sun Microsystems, Inc.
* Portions Copyright 2002 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.scorer;
import edu.cmu.sphinx.frontend.Data;
/** A component that can provide a score for given data */
public interface ScoreProvider {
/**
* Provides the score
*
* @param data data to score
* @return the score
*/
public float getScore(Data data);
/**
* Provides component score
*
* @param feature data to score
* @return the score
*/
public float[] getComponentScore(Data feature);
}

View file

@ -1,68 +0,0 @@
/*
* Copyright 1999-2002 Carnegie Mellon University.
* Portions Copyright 2002 Sun Microsystems, Inc.
* Portions Copyright 2002 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.scorer;
import edu.cmu.sphinx.frontend.Data;
import java.util.Comparator;
/** Represents an entity that can be scored against a data */
public interface Scoreable extends Data {
/**
* A {@code Scoreable} comparator that is used to order scoreables according to their score,
* in descending order.
*
* <p>Note: since a higher score results in a lower natural order,
* statements such as {@code Collections.min(list, Scoreable.COMPARATOR)}
* actually return the Scoreable with the <b>highest</b> score,
* in contrast to the natural meaning of the word "min".
*/
Comparator<Scoreable> COMPARATOR = new Comparator<Scoreable>() {
public int compare(Scoreable t1, Scoreable t2) {
if (t1.getScore() > t2.getScore()) {
return -1;
} else if (t1.getScore() == t2.getScore()) {
return 0;
} else {
return 1;
}
}
};
/**
* Calculates a score against the given data. The score can be retrieved with {@link #getScore}.
*
* @param data the data to be scored
* @return the score for the data
*/
public float calculateScore(Data data);
/**
* Retrieves a previously calculated (and possibly normalized) score
*
* @return the score
*/
public float getScore();
/**
* Normalizes a previously calculated score
*
* @param maxScore maximum score to use for norm
* @return the normalized score
*/
public float normalizeScore(float maxScore);
}
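Because the comparator orders by descending score, the usual min/max intuition flips, exactly as the note above warns. A one-line illustration over an assumed list of scoreables:
// Counter-intuitively, Collections.min(...) yields the HIGHEST scoring entry here.
Scoreable best = java.util.Collections.min(scoreables, Scoreable.COMPARATOR);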

View file

@ -1,194 +0,0 @@
package edu.cmu.sphinx.decoder.scorer;
import edu.cmu.sphinx.decoder.search.Token;
import edu.cmu.sphinx.frontend.*;
import edu.cmu.sphinx.frontend.endpoint.SpeechEndSignal;
import edu.cmu.sphinx.frontend.util.DataUtil;
import edu.cmu.sphinx.util.props.ConfigurableAdapter;
import edu.cmu.sphinx.util.props.PropertyException;
import edu.cmu.sphinx.util.props.PropertySheet;
import edu.cmu.sphinx.util.props.S4Component;
import java.util.LinkedList;
import java.util.List;
/**
* Implements some basic scorer functionality, including a simple default
* acoustic scoring implementation which scores within the current thread and
* can be changed by overriding the {@link #doScoring} method.
*
* <p>
* Note that all scores are maintained in LogMath log base.
*
* @author Holger Brandl
*/
public class SimpleAcousticScorer extends ConfigurableAdapter implements AcousticScorer {
/** Property that defines the frontend to retrieve features from for scoring */
@S4Component(type = BaseDataProcessor.class)
public final static String FEATURE_FRONTEND = "frontend";
protected BaseDataProcessor frontEnd;
/**
* An optional post-processor for computed scores that will normalize
* scores. If not set, no normalization will be applied and the token scores
* will be returned unchanged.
*/
@S4Component(type = ScoreNormalizer.class, mandatory = false)
public final static String SCORE_NORMALIZER = "scoreNormalizer";
protected ScoreNormalizer scoreNormalizer;
private LinkedList<Data> storedData;
private boolean seenEnd = false;
@Override
public void newProperties(PropertySheet ps) throws PropertyException {
super.newProperties(ps);
this.frontEnd = (BaseDataProcessor) ps.getComponent(FEATURE_FRONTEND);
this.scoreNormalizer = (ScoreNormalizer) ps.getComponent(SCORE_NORMALIZER);
storedData = new LinkedList<Data>();
}
/**
* @param frontEnd
* the frontend to retrieve features from for scoring
* @param scoreNormalizer
* optional post-processor for computed scores that will
* normalize scores. If not set, no normalization will be applied
* and the token scores will be returned unchanged.
*/
public SimpleAcousticScorer(BaseDataProcessor frontEnd, ScoreNormalizer scoreNormalizer) {
initLogger();
this.frontEnd = frontEnd;
this.scoreNormalizer = scoreNormalizer;
storedData = new LinkedList<Data>();
}
public SimpleAcousticScorer() {
}
/**
* Scores the given set of states.
*
* @param scoreableList
* A list containing scoreable objects to be scored
* @return The best scoring scoreable, or <code>null</code> if there are no
* more features to score
*/
public Data calculateScores(List<? extends Scoreable> scoreableList) {
Data data;
if (storedData.isEmpty()) {
while ((data = getNextData()) instanceof Signal) {
if (data instanceof SpeechEndSignal) {
seenEnd = true;
break;
}
if (data instanceof DataEndSignal) {
if (seenEnd)
return null;
else
break;
}
}
if (data == null)
return null;
} else {
data = storedData.poll();
}
return calculateScoresForData(scoreableList, data);
}
public Data calculateScoresAndStoreData(List<? extends Scoreable> scoreableList) {
Data data;
while ((data = getNextData()) instanceof Signal) {
if (data instanceof SpeechEndSignal) {
seenEnd = true;
break;
}
if (data instanceof DataEndSignal) {
if (seenEnd)
return null;
else
break;
}
}
if (data == null)
return null;
storedData.add(data);
return calculateScoresForData(scoreableList, data);
}
protected Data calculateScoresForData(List<? extends Scoreable> scoreableList, Data data) {
if (data instanceof SpeechEndSignal || data instanceof DataEndSignal) {
return data;
}
if (scoreableList.isEmpty())
return null;
// convert the data to FloatData if not yet done
if (data instanceof DoubleData)
data = DataUtil.DoubleData2FloatData((DoubleData) data);
Scoreable bestToken = doScoring(scoreableList, data);
// apply optional score normalization
if (scoreNormalizer != null && bestToken instanceof Token)
bestToken = scoreNormalizer.normalize(scoreableList, bestToken);
return bestToken;
}
protected Data getNextData() {
Data data = frontEnd.getData();
return data;
}
public void startRecognition() {
storedData.clear();
}
public void stopRecognition() {
// nothing needs to be done here
}
/**
* Scores a list of <code>Scoreable</code>s given a <code>Data</code>
* -object.
*
* @param scoreableList
* The list of Scoreables to be scored
* @param data
* The <code>Data</code>-object to be used for scoring.
* @param <T> type of the scoreables
* @return the best scoring <code>Scoreable</code> or <code>null</code> if
* the list of scoreables was empty.
*/
protected <T extends Scoreable> T doScoring(List<T> scoreableList, Data data) {
T best = null;
float bestScore = -Float.MAX_VALUE;
for (T item : scoreableList) {
item.calculateScore(data);
if (item.getScore() > bestScore) {
bestScore = item.getScore();
best = item;
}
}
return best;
}
// Even if we don't do any meaningful allocation here, we implement the
// methods because most extending scorers do need them.
public void allocate() {
}
public void deallocate() {
}
}

View file

@ -1,200 +0,0 @@
/*
* Copyright 1999-2002 Carnegie Mellon University.
* Portions Copyright 2002 Sun Microsystems, Inc.
* Portions Copyright 2002 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.scorer;
import edu.cmu.sphinx.frontend.Data;
import edu.cmu.sphinx.frontend.BaseDataProcessor;
import edu.cmu.sphinx.frontend.DataProcessingException;
import edu.cmu.sphinx.util.CustomThreadFactory;
import edu.cmu.sphinx.util.props.PropertyException;
import edu.cmu.sphinx.util.props.PropertySheet;
import edu.cmu.sphinx.util.props.S4Boolean;
import edu.cmu.sphinx.util.props.S4Integer;
import java.util.*;
import java.util.concurrent.*;
/**
* An acoustic scorer that breaks the scoring up into a configurable number of separate threads.
* <p>
* All scores are maintained in LogMath log base
*/
public class ThreadedAcousticScorer extends SimpleAcousticScorer {
/**
* The property that controls the thread priority of scoring threads.
* Must be a value between {@link Thread#MIN_PRIORITY} and {@link Thread#MAX_PRIORITY}, inclusive.
* The default is {@link Thread#NORM_PRIORITY}.
*/
@S4Integer(defaultValue = Thread.NORM_PRIORITY)
public final static String PROP_THREAD_PRIORITY = "threadPriority";
/**
* The property that controls the number of threads that are used to score HMM states. If the isCpuRelative
* property is false, then this is the exact number of threads that are used to score HMM states. If the isCpuRelative
* property is true, then this value is combined with the number of available processors on the system. If you want
* to have one thread per CPU available to score states, set the NUM_THREADS property to 0 and isCpuRelative to
* true. If you want exactly one thread to process scores, set NUM_THREADS to 1 and isCpuRelative to false.
* <p>
* If the value is 1 and isCpuRelative is false, no additional thread will be instantiated, and all computation will be
* done in the calling thread itself. The default value is 0.
*/
@S4Integer(defaultValue = 0)
public final static String PROP_NUM_THREADS = "numThreads";
/**
* The property that controls whether the number of available CPUs on the system is used when determining
* the number of threads to use for scoring. If true, the NUM_THREADS property is combined with the available number
* of CPUs to determine the number of threads. Note that the number of threads is constrained to be never lower than
* zero. Also, if the number of threads is 0, the states are scored on the calling thread, no separate threads are
* started. The default value is true.
*/
@S4Boolean(defaultValue = true)
public final static String PROP_IS_CPU_RELATIVE = "isCpuRelative";
/**
* The property that controls the minimum number of scoreables sent to a thread. This is used to prevent
* over-threading of the scoring, which could happen if the number of threads is high compared to the size of the
* active list. The default is 10
*/
@S4Integer(defaultValue = 10)
public final static String PROP_MIN_SCOREABLES_PER_THREAD = "minScoreablesPerThread";
private final static String className = ThreadedAcousticScorer.class.getSimpleName();
private int numThreads; // number of threads in use
private int threadPriority;
private int minScoreablesPerThread; // min scoreables sent to a thread
private ExecutorService executorService;
/**
* @param frontEnd
* the frontend to retrieve features from for scoring
* @param scoreNormalizer
* optional post-processor for computed scores that will
* normalize scores. If not set, no normalization will be applied
* and the token scores will be returned unchanged.
* @param minScoreablesPerThread
*            the minimum number of scoreables sent to a thread. This is
*            used to prevent over-threading of the scoring that could
*            happen if the number of threads is high compared to the size
*            of the active list. The default is 10
* @param cpuRelative
*            controls whether the number of available CPUs on the system is
*            used when determining the number of threads to use for
*            scoring. If true, the NUM_THREADS property is combined with
*            the available number of CPUs to determine the number of
*            threads. Note that the number of threads is constrained to be
*            never lower than zero. Also, if the number of threads is 0,
*            the states are scored on the calling thread, no separate
*            threads are started. The default value is true
* @param numThreads
*            the number of threads that are used to score HMM states. If
*            the isCpuRelative property is false, then this is the exact
*            number of threads that are used to score HMM states. If the
*            isCpuRelative property is true, then this value is combined
*            with the number of available processors on the system. If you
*            want to have one thread per CPU available to score states, set
*            the NUM_THREADS property to 0 and isCpuRelative to true.
*            If you want exactly one thread to process scores, set
*            NUM_THREADS to 1 and isCpuRelative to false.
*            <p>
*            If the value is 1 and isCpuRelative is false, no additional thread
*            will be instantiated, and all computation will be done in the
*            calling thread itself. The default value is 0.
* @param threadPriority
* the thread priority of scoring threads. Must be a value between
* {@link Thread#MIN_PRIORITY} and {@link Thread#MAX_PRIORITY}, inclusive.
* The default is {@link Thread#NORM_PRIORITY}.
*/
public ThreadedAcousticScorer(BaseDataProcessor frontEnd, ScoreNormalizer scoreNormalizer,
int minScoreablesPerThread, boolean cpuRelative, int numThreads, int threadPriority) {
super(frontEnd, scoreNormalizer);
init(minScoreablesPerThread, cpuRelative, numThreads, threadPriority);
}
public ThreadedAcousticScorer() {
}
@Override
public void newProperties(PropertySheet ps) throws PropertyException {
super.newProperties(ps);
init(ps.getInt(PROP_MIN_SCOREABLES_PER_THREAD), ps.getBoolean(PROP_IS_CPU_RELATIVE),
ps.getInt(PROP_NUM_THREADS), ps.getInt(PROP_THREAD_PRIORITY));
}
private void init(int minScoreablesPerThread, boolean cpuRelative, int numThreads, int threadPriority) {
this.minScoreablesPerThread = minScoreablesPerThread;
if (cpuRelative) {
numThreads += Runtime.getRuntime().availableProcessors();
}
this.numThreads = numThreads;
this.threadPriority = threadPriority;
}
@Override
public void allocate() {
super.allocate();
if (executorService == null) {
if (numThreads > 1) {
logger.fine("# of scoring threads: " + numThreads);
executorService = Executors.newFixedThreadPool(numThreads,
new CustomThreadFactory(className, true, threadPriority));
} else {
logger.fine("no scoring threads");
}
}
}
@Override
public void deallocate() {
super.deallocate();
if (executorService != null) {
executorService.shutdown();
executorService = null;
}
}
@Override
protected <T extends Scoreable> T doScoring(List<T> scoreableList, final Data data) {
if (numThreads > 1) {
int totalSize = scoreableList.size();
int jobSize = Math.max((totalSize + numThreads - 1) / numThreads, minScoreablesPerThread);
if (jobSize < totalSize) {
List<Callable<T>> tasks = new ArrayList<Callable<T>>();
for (int from = 0, to = jobSize; from < totalSize; from = to, to += jobSize) {
final List<T> scoringJob = scoreableList.subList(from, Math.min(to, totalSize));
tasks.add(new Callable<T>() {
public T call() throws Exception {
return ThreadedAcousticScorer.super.doScoring(scoringJob, data);
}
});
}
List<T> finalists = new ArrayList<T>(tasks.size());
try {
for (Future<T> result : executorService.invokeAll(tasks))
finalists.add(result.get());
} catch (Exception e) {
throw new DataProcessingException("No scoring jobs ended", e);
}
return Collections.min(finalists, Scoreable.COMPARATOR);
}
}
// if no additional threads are necessary, do the scoring in the calling thread
return super.doScoring(scoreableList, data);
}
}
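The split in doScoring is ceiling division clamped from below by the per-thread minimum. A worked example with illustrative values:
// Worked example of the job-size arithmetic in doScoring:
int totalSize = 1000, numThreads = 4, minScoreablesPerThread = 10;
int jobSize = Math.max((totalSize + numThreads - 1) / numThreads, minScoreablesPerThread);
// jobSize == 250: four sublists of 250 scoreables are scored in parallel.
// With totalSize = 30 the clamp wins (jobSize == 10), so only three jobs
// of at most 10 scoreables each would be submitted.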

View file

@ -1,117 +0,0 @@
/*
* Copyright 1999-2002 Carnegie Mellon University.
* Portions Copyright 2002 Sun Microsystems, Inc.
* Portions Copyright 2002 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.search;
import java.util.List;
import edu.cmu.sphinx.util.props.*;
/**
* An active list is maintained as a sorted list
* <p>
* Note that all scores are represented in LogMath logbase
*/
public interface ActiveList extends Iterable<Token> {
/**
* property that sets the desired (or target) size for this active list. This is sometimes referred to as the beam
* size
*/
@S4Integer(defaultValue = 2000)
public final static String PROP_ABSOLUTE_BEAM_WIDTH = "absoluteBeamWidth";
/**
* Property that sets the minimum score relative to the maximum score in the list for pruning. Tokens with a score
* less than relativeBeamWidth * maximumScore will be pruned from the list
*/
@S4Double(defaultValue = 0.0)
public final static String PROP_RELATIVE_BEAM_WIDTH = "relativeBeamWidth";
/**
* Property that indicates whether or not the active list will implement 'strict pruning'. When strict pruning is
* enabled, the active list will not remove tokens from the active list until they have been completely scored. If
* strict pruning is not enabled, tokens can be removed from the active list based upon their entry scores. The
* default setting is true (enabled).
*/
@S4Boolean(defaultValue = true)
public final static String PROP_STRICT_PRUNING = "strictPruning";
/**
* Adds the given token to the list, keeping track of the lowest scoring token
*
* @param token the token to add
*/
public void add(Token token);
/**
* Purges the active list of excess members returning a (potentially new) active list
*
* @return a purged active list
*/
public ActiveList purge();
/**
* Returns the size of this list
*
* @return the size
*/
public int size();
/**
* Gets the list of all tokens
*
* @return the set of tokens
*/
public List<Token> getTokens();
/**
* gets the beam threshold based upon the best scoring token
*
* @return the beam threshold
*/
public float getBeamThreshold();
/**
* gets the best score in the list
*
* @return the best score
*/
public float getBestScore();
/**
* Sets the best scoring token for this active list
*
* @param token the best scoring token
*/
public void setBestToken(Token token);
/**
* Gets the best scoring token for this active list
*
* @return the best scoring token
*/
public Token getBestToken();
/**
* Creates a new empty version of this active list with the same general properties.
*
* @return a new active list.
*/
public ActiveList newInstance();
}
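Because scores are kept in the LogMath log domain, the relative-beam multiplication becomes an addition, which is what getBeamThreshold exposes. A sketch of how a pruner might apply it (activeList is assumed):
// Sketch: relative-beam check in the log domain.
float threshold = activeList.getBeamThreshold(); // best score + log(relative beam width)
for (Token t : activeList.getTokens()) {
    if (t.getScore() < threshold) {
        // outside the relative beam: a candidate for pruning
    }
}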

View file

@ -1,79 +0,0 @@
/*
*
* Copyright 1999-2004 Carnegie Mellon University.
* Portions Copyright 2004 Sun Microsystems, Inc.
* Portions Copyright 2004 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.search;
import edu.cmu.sphinx.util.LogMath;
import edu.cmu.sphinx.util.props.*;
/** Creates new active lists. */
public abstract class ActiveListFactory implements Configurable {
/**
* property that sets the desired (or target) size for this active list. This is sometimes referred to as the beam
* size
*/
@S4Integer(defaultValue = -1)
public final static String PROP_ABSOLUTE_BEAM_WIDTH = "absoluteBeamWidth";
/**
* Property that sets the minimum score relative to the maximum score in the list for pruning. Tokens with a score
* less than relativeBeamWidth * maximumScore will be pruned from the list
*/
@S4Double(defaultValue = 1E-80)
public final static String PROP_RELATIVE_BEAM_WIDTH = "relativeBeamWidth";
/**
* Property that indicates whether or not the active list will implement 'strict pruning'. When strict pruning is
* enabled, the active list will not remove tokens from the active list until they have been completely scored. If
* strict pruning is not enabled, tokens can be removed from the active list based upon their entry scores. The
* default setting is true (enabled).
*/
@S4Boolean(defaultValue = true)
public final static String PROP_STRICT_PRUNING = "strictPruning";
protected LogMath logMath;
protected int absoluteBeamWidth;
protected float logRelativeBeamWidth;
/**
*
* @param absoluteBeamWidth beam for absolute pruning
* @param relativeBeamWidth beam for relative pruning
*/
public ActiveListFactory(int absoluteBeamWidth,double relativeBeamWidth){
logMath = LogMath.getLogMath();
this.absoluteBeamWidth = absoluteBeamWidth;
this.logRelativeBeamWidth = logMath.linearToLog(relativeBeamWidth);
}
public ActiveListFactory() {
}
public void newProperties(PropertySheet ps) throws PropertyException {
logMath = LogMath.getLogMath();
absoluteBeamWidth = ps.getInt(PROP_ABSOLUTE_BEAM_WIDTH);
double relativeBeamWidth = ps.getDouble(PROP_RELATIVE_BEAM_WIDTH);
logRelativeBeamWidth = logMath.linearToLog(relativeBeamWidth);
}
/**
* Creates a new active list of a particular type
*
* @return the active list
*/
public abstract ActiveList newInstance();
}

View file

@ -1,77 +0,0 @@
/*
* Copyright 1999-2002 Carnegie Mellon University.
* Portions Copyright 2002 Sun Microsystems, Inc.
* Portions Copyright 2002 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.search;
import edu.cmu.sphinx.util.props.Configurable;
import edu.cmu.sphinx.util.props.S4Double;
import edu.cmu.sphinx.util.props.S4Integer;
import java.util.Iterator;
/** An active list is maintained as a sorted list */
public interface ActiveListManager extends Configurable {
/** The property that specifies the absolute word beam width */
@S4Integer(defaultValue = 2000)
public final static String PROP_ABSOLUTE_WORD_BEAM_WIDTH =
"absoluteWordBeamWidth";
/** The property that specifies the relative word beam width */
@S4Double(defaultValue = 0.0)
public final static String PROP_RELATIVE_WORD_BEAM_WIDTH =
"relativeWordBeamWidth";
/**
* Adds the given token to the list
*
* @param token the token to add
*/
public void add(Token token);
/**
* Returns an Iterator of all the non-emitting ActiveLists. The iteration order is the same as the search state
* order.
*
* @return an Iterator of non-emitting ActiveLists
*/
public Iterator<ActiveList> getNonEmittingListIterator();
/**
* Returns the emitting ActiveList from the manager
*
* @return the emitting ActiveList
*/
public ActiveList getEmittingList();
/**
* Clears emitting list in manager
*/
public void clearEmittingList();
/** Dumps out debug info for the active list manager */
public void dump();
/**
* Sets the total number of state types to be managed
*
* @param numStateOrder the total number of state types
*/
public void setNumStateOrder(int numStateOrder);
}

View file

@ -1,87 +0,0 @@
/*
* Copyright 1999-2002 Carnegie Mellon University.
* Portions Copyright 2002 Sun Microsystems, Inc.
* Portions Copyright 2002 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.search;
import edu.cmu.sphinx.decoder.scorer.Scoreable;
import java.util.*;
/**
* Manager for pruned hypotheses
*
* @author Joe Woelfel
*/
public class AlternateHypothesisManager {
private final Map<Token, List<Token>> viterbiLoserMap = new HashMap<Token, List<Token>>();
private final int maxEdges;
/**
* Creates an alternate hypotheses manager
*
* @param maxEdges the maximum edges allowed
*/
public AlternateHypothesisManager(int maxEdges) {
this.maxEdges = maxEdges;
}
/**
* Adds an alternate predecessor for a token that would have lost because of the Viterbi criterion.
*
* @param token - a token that has an alternate lower scoring predecessor that still might be of interest
* @param predecessor - a predecessor that scores lower than token.getPredecessor().
*/
public void addAlternatePredecessor(Token token, Token predecessor) {
assert predecessor != token.getPredecessor();
List<Token> list = viterbiLoserMap.get(token);
if (list == null) {
list = new ArrayList<Token>();
viterbiLoserMap.put(token, list);
}
list.add(predecessor);
}
/**
* Returns a list of alternate predecessors for a token.
*
* @param token - a token that may have an alternate lower scoring predecessor that still might be of interest
* @return A list of predecessors that scores lower than token.getPredecessor().
*/
public List<Token> getAlternatePredecessors(Token token) {
return viterbiLoserMap.get(token);
}
/** Purge all but max number of alternate preceding token hypotheses. */
public void purge() {
int max = maxEdges - 1;
for (Map.Entry<Token, List<Token>> entry : viterbiLoserMap.entrySet()) {
List<Token> list = entry.getValue();
Collections.sort(list, Scoreable.COMPARATOR);
List<Token> newList = list.subList(0, list.size() > max ? max : list.size());
viterbiLoserMap.put(entry.getKey(), newList);
}
}
public boolean hasAlternatePredecessors(Token token) {
return viterbiLoserMap.containsKey(token);
}
}
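A short usage sketch; token and loser are assumed to come out of a Viterbi search, where loser scored below token's chosen predecessor:
// Sketch: retain at most (maxEdges - 1) alternate predecessors per token.
AlternateHypothesisManager ahm = new AlternateHypothesisManager(3);
ahm.addAlternatePredecessor(token, loser);
ahm.purge(); // each token now keeps at most its 2 best scoring alternates
List<Token> alternates = ahm.getAlternatePredecessors(token);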

View file

@ -1,270 +0,0 @@
/*
*
* Copyright 1999-2004 Carnegie Mellon University.
* Portions Copyright 2004 Sun Microsystems, Inc.
* Portions Copyright 2004 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.search;
import edu.cmu.sphinx.util.props.PropertyException;
import edu.cmu.sphinx.util.props.PropertySheet;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
/** A factory for PartitionActiveLists */
public class PartitionActiveListFactory extends ActiveListFactory {
/**
*
* @param absoluteBeamWidth beam for absolute pruning
* @param relativeBeamWidth beam for relative pruning
*/
public PartitionActiveListFactory(int absoluteBeamWidth, double relativeBeamWidth) {
super(absoluteBeamWidth, relativeBeamWidth);
}
public PartitionActiveListFactory() {
}
/*
* (non-Javadoc)
*
* @see edu.cmu.sphinx.util.props.Configurable#newProperties(edu.cmu.sphinx.util.props.PropertySheet)
*/
@Override
public void newProperties(PropertySheet ps) throws PropertyException {
super.newProperties(ps);
}
/*
* (non-Javadoc)
*
* @see edu.cmu.sphinx.decoder.search.ActiveListFactory#newInstance()
*/
@Override
public ActiveList newInstance() {
return new PartitionActiveList(absoluteBeamWidth, logRelativeBeamWidth);
}
/**
* An active list that does absolute beam pruning by partitioning the
* token list based on absolute beam width, instead of sorting the token
* list and then chopping it up with the absolute beam width. The
* expected run time of this partitioning algorithm is O(n), instead of O(n log n)
* for merge sort.
* <p>
* This class is not thread safe and should only be used by a single thread.
* <p>
* Note that all scores are maintained in the LogMath log base.
*/
class PartitionActiveList implements ActiveList {
private int size;
private final int absoluteBeamWidth;
private final float logRelativeBeamWidth;
private Token bestToken;
// when the list is changed these things should be
// changed/updated as well
private Token[] tokenList;
private final Partitioner partitioner = new Partitioner();
/** Creates an empty active list
* @param absoluteBeamWidth beam for absolute pruning
* @param logRelativeBeamWidth beam for relative pruning
*/
public PartitionActiveList(int absoluteBeamWidth,
float logRelativeBeamWidth) {
this.absoluteBeamWidth = absoluteBeamWidth;
this.logRelativeBeamWidth = logRelativeBeamWidth;
int listSize = 2000;
if (absoluteBeamWidth > 0) {
listSize = absoluteBeamWidth / 3;
}
this.tokenList = new Token[listSize];
}
/**
* Adds the given token to the list
*
* @param token the token to add
*/
public void add(Token token) {
if (size < tokenList.length) {
tokenList[size] = token;
size++;
} else {
// token array too small, double the capacity
doubleCapacity();
add(token);
}
if (bestToken == null || token.getScore() > bestToken.getScore()) {
bestToken = token;
}
}
/** Doubles the capacity of the Token array. */
private void doubleCapacity() {
tokenList = Arrays.copyOf(tokenList, tokenList.length * 2);
}
/**
* Purges excess members. When an absolute beam is set, only the best absoluteBeamWidth tokens are kept
*
* @return a (possibly new) active list
*/
public ActiveList purge() {
// if the absolute beam is zero, this means there
// should be no constraint on the abs beam size at all
// so we will only be relative beam pruning, which means
// that we don't have to sort the list
if (absoluteBeamWidth > 0) {
// if we have an absolute beam, then we will
// need to sort the tokens to apply the beam
if (size > absoluteBeamWidth) {
size = partitioner.partition(tokenList, size,
absoluteBeamWidth) + 1;
}
}
return this;
}
/**
* gets the beam threshold based upon the best scoring token
*
* @return the beam threshold
*/
public float getBeamThreshold() {
return getBestScore() + logRelativeBeamWidth;
}
/**
* gets the best score in the list
*
* @return the best score
*/
public float getBestScore() {
float bestScore = -Float.MAX_VALUE;
if (bestToken != null) {
bestScore = bestToken.getScore();
}
// A sanity check
// for (Token t : this) {
// if (t.getScore() > bestScore) {
// System.out.println("GBS: found better score "
// + t + " vs. " + bestScore);
// }
// }
return bestScore;
}
/**
* Sets the best scoring token for this active list
*
* @param token the best scoring token
*/
public void setBestToken(Token token) {
bestToken = token;
}
/**
* Gets the best scoring token for this active list
*
* @return the best scoring token
*/
public Token getBestToken() {
return bestToken;
}
/**
* Retrieves the iterator for this token list.
*
* @return the iterator for this token list
*/
public Iterator<Token> iterator() {
return (new TokenArrayIterator(tokenList, size));
}
/**
* Gets the list of all tokens
*
* @return the list of tokens
*/
public List<Token> getTokens() {
return Arrays.asList(tokenList).subList(0, size);
}
/**
* Returns the number of tokens on this active list
*
* @return the size of the active list
*/
public final int size() {
return size;
}
/* (non-Javadoc)
* @see edu.cmu.sphinx.decoder.search.ActiveList#createNew()
*/
public ActiveList newInstance() {
return PartitionActiveListFactory.this.newInstance();
}
}
}
class TokenArrayIterator implements Iterator<Token> {
private final Token[] tokenArray;
private final int size;
private int pos;
TokenArrayIterator(Token[] tokenArray, int size) {
this.tokenArray = tokenArray;
this.pos = 0;
this.size = size;
}
/** Returns true if the iteration has more tokens. */
public boolean hasNext() {
return pos < size;
}
/** Returns the next token in the iteration. */
public Token next() throws NoSuchElementException {
if (pos >= tokenArray.length) {
throw new NoSuchElementException();
}
return tokenArray[pos++];
}
/** Unimplemented, throws an Error if called. */
public void remove() {
throw new Error("TokenArrayIterator.remove() unimplemented");
}
}
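A hedged usage sketch of the factory; candidateTokens and the beam values are illustrative:
// Sketch: absolute beam of 2000 tokens, relative beam of 1e-60.
ActiveListFactory factory = new PartitionActiveListFactory(2000, 1e-60);
ActiveList list = factory.newInstance();
for (Token t : candidateTokens) {
    list.add(t);
}
list = list.purge(); // partitions instead of sorting: expected O(n)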

View file

@ -1,180 +0,0 @@
/*
* Copyright 1999-2002 Carnegie Mellon University.
* Portions Copyright 2002 Sun Microsystems, Inc.
* Portions Copyright 2002 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.search;
import java.util.Arrays;
import edu.cmu.sphinx.decoder.scorer.Scoreable;
/**
* Partitions a list of tokens according to the token score, used
* in {@link PartitionActiveListFactory}. This class is supposed
* to provide O(n) performance, which makes it preferable to a full
* O(n log n) sort.
*/
public class Partitioner {
/** Max recursion depth **/
final private int MAX_DEPTH = 50;
/**
* Partitions sub-array of tokens around the end token.
* Puts all elements scoring at least as high as the pivot at the start of the array,
* shifting the pivot to its new position
*
* @param tokens the token array to partition
* @param start the starting index of the subarray
* @param end the pivot and the ending index of the subarray, inclusive
* @return the index (after partitioning) of the element around which the array is partitioned
*/
private int endPointPartition(Token[] tokens, int start, int end) {
Token pivot = tokens[end];
float pivotScore = pivot.getScore();
int i = start;
int j = end - 1;
while (true) {
while (i < end && tokens[i].getScore() >= pivotScore)
i++;
while (j > i && tokens[j].getScore() < pivotScore)
j--;
if (j <= i)
break;
Token current = tokens[j];
setToken(tokens, j, tokens[i]);
setToken(tokens, i, current);
}
setToken(tokens, end, tokens[i]);
setToken(tokens, i, pivot);
return i;
}
/**
* Partitions sub-array of tokens around the x-th token by selecting the midpoint of the token array as the pivot.
* Partially solves issues with slow performance on already sorted arrays.
*
* @param tokens the token array to partition
* @param start the starting index of the subarray
* @param end the ending index of the subarray, inclusive
* @return the index of the element around which the array is partitioned
*/
private int midPointPartition(Token[] tokens, int start, int end) {
int middle = (start + end) >>> 1;
Token temp = tokens[end];
setToken(tokens, end, tokens[middle]);
setToken(tokens, middle, temp);
return endPointPartition(tokens, start, end);
}
/**
* Partitions the given array of tokens in place, so that the n highest scoring tokens will be at the beginning of
* the array, though not in any particular order.
*
* @param tokens the array of tokens to partition
* @param size the number of tokens to partition
* @param n the number of tokens in the final partition
* @return the index of the last element in the partition
*/
public int partition(Token[] tokens, int size, int n) {
if (tokens.length > n) {
return midPointSelect(tokens, 0, size - 1, n, 0);
} else {
return findBest(tokens, size);
}
}
/**
* Simply finds the lowest scoring token and puts it in the last slot
*
* @param tokens array of tokens
* @param size the number of tokens to partition
* @return index of the last slot, which holds the lowest scoring token
*/
private int findBest(Token[] tokens, int size) {
int r = -1;
float lowestScore = Float.MAX_VALUE;
for (int i = 0; i < size; i++) { // only the first "size" entries are valid tokens
float currentScore = tokens[i].getScore();
if (currentScore <= lowestScore) {
lowestScore = currentScore;
r = i; // "r" is the returned index
}
}
// exchange tokens[r] <=> last token,
// where tokens[r] has the lowest score
int last = size - 1;
if (last >= 0) {
Token lastToken = tokens[last];
setToken(tokens, last, tokens[r]);
setToken(tokens, r, lastToken);
}
// return the last index
return last;
}
private void setToken(Token[] list, int index, Token token) {
list[index] = token;
}
/**
* Selects the token with the targetSize-th largest token score.
*
* @param tokens the token array to partition
* @param start the starting index of the subarray
* @param end the ending index of the subarray, inclusive
* @param targetSize target size of the partition
* @param depth recursion depth to avoid stack overflow and fall back to simple partition.
* @return the index of the token with the targetSize-th largest score
*/
private int midPointSelect(Token[] tokens, int start, int end, int targetSize, int depth) {
if (depth > MAX_DEPTH) {
return simplePointSelect (tokens, start, end, targetSize);
}
if (start == end) {
return start;
}
int partitionToken = midPointPartition(tokens, start, end);
int newSize = partitionToken - start + 1;
if (targetSize == newSize) {
return partitionToken;
} else if (targetSize < newSize) {
return midPointSelect(tokens, start, partitionToken - 1, targetSize, depth + 1);
} else {
return midPointSelect(tokens, partitionToken + 1, end, targetSize - newSize, depth + 1);
}
}
/**
* Fallback method to get the partition
*
* @param tokens the token array to partition
* @param start the starting index of the subarray
* @param end the ending index of the subarray, inclusive
* @param targetSize target size of the partition
* @return the index of the token with the ith largest score
*/
private int simplePointSelect(Token[] tokens, int start, int end, int targetSize) {
Arrays.sort(tokens, start, end + 1, Scoreable.COMPARATOR);
return start + targetSize - 1;
}
}
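Taken together, midPointPartition and midPointSelect above implement a quickselect over token scores. Below is a self-contained sketch of the same midpoint-pivot selection, using plain float scores in place of Token objects; all names are illustrative and it approximates the class above rather than reproducing it.

import java.util.Arrays;

public class QuickselectSketch {

    // Moves the n highest values to the front of a[0..size) and returns
    // the index of the last element of that front partition.
    static int partitionTopN(float[] a, int size, int n) {
        int lo = 0, hi = size - 1, remaining = n;
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;                  // midpoint pivot, as in midPointPartition
            float t = a[hi]; a[hi] = a[mid]; a[mid] = t;
            int p = endPartition(a, lo, hi);
            int leftSize = p - lo + 1;
            if (remaining == leftSize) return p;
            if (remaining < leftSize) hi = p - 1;       // recurse into the left part
            else { remaining -= leftSize; lo = p + 1; } // or into the right part
        }
        return lo;
    }

    // Endpoint partition: values greater than or equal to the pivot end up on the left.
    static int endPartition(float[] a, int start, int end) {
        float pivot = a[end];
        int i = start, j = end - 1;
        while (true) {
            while (i < end && a[i] >= pivot) i++;
            while (j > i && a[j] < pivot) j--;
            if (j <= i) break;
            float t = a[j]; a[j] = a[i]; a[i] = t;
        }
        a[end] = a[i];
        a[i] = pivot;
        return i;
    }

    public static void main(String[] args) {
        float[] scores = {0.1f, 0.9f, 0.4f, 0.7f, 0.2f, 0.8f};
        int last = partitionTopN(scores, scores.length, 3);
        // prints the three highest scores in the first three slots, in no particular order
        System.out.println("top-3 ends at index " + last + ": " + Arrays.toString(scores));
    }
}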

View file

@ -1,64 +0,0 @@
/*
* Copyright 1999-2002 Carnegie Mellon University.
* Portions Copyright 2002 Sun Microsystems, Inc.
* Portions Copyright 2002 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.search;
import edu.cmu.sphinx.result.Result;
import edu.cmu.sphinx.util.props.Configurable;
/**
* Defines the interface for the SearchManager. The SearchManager's primary role is to execute the search for a given
* number of frames. The SearchManager will return interim results as the recognition proceeds and when recognition
* completes a final result will be returned.
*/
public interface SearchManager extends Configurable {
/**
* Allocates the resources necessary for this search. This should be called once before any recognitions are
* performed.
*/
public void allocate();
/**
* Deallocates the resources used by this search. This should be called once after all recognitions are completed
* and the search manager is no longer needed.
*/
public void deallocate();
/**
* Prepares the SearchManager for recognition. This method must be called before <code> recognize </code> is
* called. Typically, <code> startRecognition </code> and <code> stopRecognition </code> are called to bracket an utterance.
*/
public void startRecognition();
/** Performs post-recognition cleanup. This method should be called after recognize returns a final result. */
public void stopRecognition();
/**
* Performs recognition. Processes no more than the given number of frames before returning. This method returns a
* partial result after nFrames have been processed, or a final result if recognition completes while processing
* frames. If a final result is returned, the actual number of frames processed can be retrieved from the result.
* This method may block while waiting for frames to arrive.
*
* @param nFrames the maximum number of frames to process. A final result may be returned before all nFrames are
* processed.
* @return the recognition result; the result may be a partial or a final result, or null if no frames have
* arrived
*/
public Result recognize(int nFrames);
}
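The lifecycle this interface implies is: allocate once, bracket each utterance with startRecognition and stopRecognition, and poll recognize until a final result arrives. The following is a minimal sketch of that call sequence, assuming an already configured SearchManager; the 50-frame batch size is an arbitrary illustrative value.

import edu.cmu.sphinx.decoder.search.SearchManager;
import edu.cmu.sphinx.result.Result;

public class SearchManagerLifecycleSketch {

    /** Runs one utterance through an already configured search manager. */
    public static Result run(SearchManager manager) {
        manager.allocate();                      // once, before any recognitions
        try {
            manager.startRecognition();          // brackets the utterance
            Result result;
            do {
                result = manager.recognize(50);  // partial results until a final one
            } while (result != null && !result.isFinal());
            manager.stopRecognition();           // post-recognition cleanup
            return result;
        } finally {
            manager.deallocate();                // once, when no longer needed
        }
    }
}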

View file

@ -1,222 +0,0 @@
/*
*
* Copyright 1999-2004 Carnegie Mellon University.
* Portions Copyright 2004 Sun Microsystems, Inc.
* Portions Copyright 2004 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.search;
import edu.cmu.sphinx.decoder.scorer.Scoreable;
import edu.cmu.sphinx.util.props.PropertyException;
import edu.cmu.sphinx.util.props.PropertySheet;
import java.util.Collections;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
/** A factory for simple active lists */
public class SimpleActiveListFactory extends ActiveListFactory {
/**
* Creates factory for simple active lists
* @param absoluteBeamWidth absolute pruning beam
* @param relativeBeamWidth relative pruning beam
*/
public SimpleActiveListFactory(int absoluteBeamWidth,
double relativeBeamWidth)
{
super(absoluteBeamWidth, relativeBeamWidth);
}
public SimpleActiveListFactory() {
}
/*
* (non-Javadoc)
*
* @see edu.cmu.sphinx.util.props.Configurable#newProperties(edu.cmu.sphinx.util.props.PropertySheet)
*/
@Override
public void newProperties(PropertySheet ps) throws PropertyException {
super.newProperties(ps);
}
/*
* (non-Javadoc)
*
* @see edu.cmu.sphinx.decoder.search.ActiveListFactory#newInstance()
*/
@Override
public ActiveList newInstance() {
return new SimpleActiveList(absoluteBeamWidth, logRelativeBeamWidth);
}
/**
* An active list that tries to be simple and correct. This type of active list will be slow, but should exhibit
* correct behavior. Faster versions of the ActiveList exist (HeapActiveList, TreeActiveList).
* <p>
* This class is not thread safe and should only be used by a single thread.
* <p>
* Note that all scores are maintained in the LogMath log domain
*/
class SimpleActiveList implements ActiveList {
private int absoluteBeamWidth = 2000;
private final float logRelativeBeamWidth;
private Token bestToken;
private List<Token> tokenList = new LinkedList<Token>();
/**
* Creates an empty active list
*
* @param absoluteBeamWidth the absolute beam width
* @param logRelativeBeamWidth the relative beam width (in the log domain)
*/
public SimpleActiveList(int absoluteBeamWidth,
float logRelativeBeamWidth) {
this.absoluteBeamWidth = absoluteBeamWidth;
this.logRelativeBeamWidth = logRelativeBeamWidth;
}
/**
* Adds the given token to the list
*
* @param token the token to add
*/
public void add(Token token) {
tokenList.add(token);
if (bestToken == null || token.getScore() > bestToken.getScore()) {
bestToken = token;
}
}
/**
* Replaces an old token with a new token
*
* @param oldToken the token to replace (or null in which case, replace works like add).
* @param newToken the new token to be placed in the list.
*/
public void replace(Token oldToken, Token newToken) {
add(newToken);
if (oldToken != null) {
if (!tokenList.remove(oldToken)) {
// Some optional debugging code here to dump out the paths
// when this "should never happen" error happens
// System.out.println("SimpleActiveList: remove "
// + oldToken + " missing, but replaced by "
// + newToken);
// oldToken.dumpTokenPath(true);
// newToken.dumpTokenPath(true);
}
}
}
/**
* Purges excess members, keeping only the absoluteBeamWidth best scoring tokens
*
* @return a (possibly new) active list
*/
public ActiveList purge() {
if (absoluteBeamWidth > 0 && tokenList.size() > absoluteBeamWidth) {
Collections.sort(tokenList, Scoreable.COMPARATOR);
tokenList = tokenList.subList(0, absoluteBeamWidth);
}
return this;
}
/**
* Retrieves the iterator for this token list.
*
* @return the iterator for this token list
*/
public Iterator<Token> iterator() {
return tokenList.iterator();
}
/**
* Gets the set of all tokens
*
* @return the set of tokens
*/
public List<Token> getTokens() {
return tokenList;
}
/**
* Returns the number of tokens on this active list
*
* @return the size of the active list
*/
public final int size() {
return tokenList.size();
}
/**
* gets the beam threshold based upon the best scoring token
*
* @return the beam threshold
*/
public float getBeamThreshold() {
return getBestScore() + logRelativeBeamWidth;
}
/**
* gets the best score in the list
*
* @return the best score
*/
public float getBestScore() {
float bestScore = -Float.MAX_VALUE;
if (bestToken != null) {
bestScore = bestToken.getScore();
}
return bestScore;
}
/**
* Sets the best scoring token for this active list
*
* @param token the best scoring token
*/
public void setBestToken(Token token) {
bestToken = token;
}
/**
* Gets the best scoring token for this active list
*
* @return the best scoring token
*/
public Token getBestToken() {
return bestToken;
}
/* (non-Javadoc)
* @see edu.cmu.sphinx.decoder.search.ActiveList#createNew()
*/
public ActiveList newInstance() {
return SimpleActiveListFactory.this.newInstance();
}
}
}
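getBeamThreshold() applies the relative beam in the log domain, so the linear product bestScore * relativeBeamWidth becomes an addition of logs. A small self-contained sketch of that pruning test follows; it uses the natural log, whereas Sphinx's LogMath may be configured with a different base, and all values are illustrative.

public class BeamThresholdSketch {
    public static void main(String[] args) {
        double relativeBeamWidth = 1e-60;                      // linear beam
        float logRelativeBeamWidth = (float) Math.log(relativeBeamWidth);
        float bestScore = -1234.5f;                            // best token score, log domain
        float threshold = bestScore + logRelativeBeamWidth;    // what getBeamThreshold() computes
        float[] tokenScores = {-1240.0f, -1300.0f, -1500.0f};
        for (float score : tokenScores) {
            // tokens scoring below the threshold would be pruned
            System.out.println(score + (score < threshold ? " pruned" : " kept"));
        }
    }
}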

View file

@ -1,244 +0,0 @@
/*
* Copyright 1999-2002 Carnegie Mellon University.
* Portions Copyright 2002 Sun Microsystems, Inc.
* Portions Copyright 2002 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.search;
import edu.cmu.sphinx.util.props.PropertyException;
import edu.cmu.sphinx.util.props.PropertySheet;
import edu.cmu.sphinx.util.props.S4Boolean;
import edu.cmu.sphinx.util.props.S4ComponentList;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.logging.Logger;
/**
* A list of ActiveLists. Different token types are placed in different lists.
* <p>
* This class is not thread safe and should only be used by a single thread.
*/
public class SimpleActiveListManager implements ActiveListManager {
/**
* This property is used by the Iterator returned from the getNonEmittingListIterator() method. When the
* Iterator.next() method is called, this property determines whether to check that the lists prior to the one
* returned by next() are empty (they should be empty). If they are not empty, an Error will be thrown.
*/
@S4Boolean(defaultValue = false)
public static final String PROP_CHECK_PRIOR_LISTS_EMPTY = "checkPriorListsEmpty";
/** The property that defines the active list factories to be used by this search manager. */
@S4ComponentList(type = ActiveListFactory.class)
public final static String PROP_ACTIVE_LIST_FACTORIES = "activeListFactories";
// --------------------------------------
// Configuration data
// --------------------------------------
private Logger logger;
private boolean checkPriorLists;
private List<ActiveListFactory> activeListFactories;
private ActiveList[] currentActiveLists;
/**
* Create a simple list manager
* @param activeListFactories factories
* @param checkPriorLists check prior lists during operation
*/
public SimpleActiveListManager(List<ActiveListFactory> activeListFactories, boolean checkPriorLists) {
this.logger = Logger.getLogger( getClass().getName() );
this.activeListFactories = activeListFactories;
this.checkPriorLists = checkPriorLists;
}
public SimpleActiveListManager() {
}
/*
* (non-Javadoc)
*
* @see edu.cmu.sphinx.util.props.Configurable#newProperties(edu.cmu.sphinx.util.props.PropertySheet)
*/
public void newProperties(PropertySheet ps) throws PropertyException {
logger = ps.getLogger();
activeListFactories = ps.getComponentList(PROP_ACTIVE_LIST_FACTORIES, ActiveListFactory.class);
checkPriorLists = ps.getBoolean(PROP_CHECK_PRIOR_LISTS_EMPTY);
}
/*
* (non-Javadoc)
*
* @see edu.cmu.sphinx.decoder.search.ActiveListManager#setNumStateOrder(java.lang.Class[])
*/
public void setNumStateOrder(int numStateOrder) {
// check to make sure that we have the correct
// number of active list factories for the given search states
currentActiveLists = new ActiveList[numStateOrder];
if (activeListFactories.isEmpty()) {
logger.severe("No active list factories configured");
throw new Error("No active list factories configured");
}
if (activeListFactories.size() != currentActiveLists.length) {
logger.warning("Need " + currentActiveLists.length +
" active list factories, found " +
activeListFactories.size());
}
createActiveLists();
}
/**
* Creates the emitting and non-emitting active lists. When creating the non-emitting active lists, we will look at
* their respective beam widths (eg, word beam, unit beam, state beam).
*/
private void createActiveLists() {
int nlists = activeListFactories.size();
for (int i = 0; i < currentActiveLists.length; i++) {
int which = i;
if (which >= nlists) {
which = nlists - 1;
}
ActiveListFactory alf = activeListFactories.get(which);
currentActiveLists[i] = alf.newInstance();
}
}
/**
* Adds the given token to the list
*
* @param token the token to add
*/
public void add(Token token) {
ActiveList activeList = findListFor(token);
if (activeList == null) {
throw new Error("Cannot find ActiveList for "
+ token.getSearchState().getClass());
}
activeList.add(token);
}
/**
* Given a token find the active list associated with the token type
*
* @param token the token of interest
* @return the active list
*/
private ActiveList findListFor(Token token) {
return currentActiveLists[token.getSearchState().getOrder()];
}
/**
* Returns the emitting ActiveList from the manager
*
* @return the emitting ActiveList
*/
public ActiveList getEmittingList() {
ActiveList list = currentActiveLists[currentActiveLists.length - 1];
return list;
}
/**
* Clears emitting list in manager
*/
public void clearEmittingList() {
ActiveList list = currentActiveLists[currentActiveLists.length - 1];
currentActiveLists[currentActiveLists.length - 1] = list.newInstance();
}
/**
* Returns an Iterator of all the non-emitting ActiveLists. The iteration order is the same as the search state
* order.
*
* @return an Iterator of non-emitting ActiveLists
*/
public Iterator<ActiveList> getNonEmittingListIterator() {
return (new NonEmittingListIterator());
}
private class NonEmittingListIterator implements Iterator<ActiveList> {
private int listPtr;
public NonEmittingListIterator() {
listPtr = -1;
}
public boolean hasNext() {
return listPtr + 1 < currentActiveLists.length - 1;
}
public ActiveList next() throws NoSuchElementException {
listPtr++;
if (listPtr >= currentActiveLists.length) {
throw new NoSuchElementException();
}
if (checkPriorLists) {
checkPriorLists();
}
return currentActiveLists[listPtr];
}
/** Checks that all lists prior to listPtr are empty. */
private void checkPriorLists() {
for (int i = 0; i < listPtr; i++) {
ActiveList activeList = currentActiveLists[i];
if (activeList.size() > 0) {
throw new Error("At while processing state order"
+ listPtr + ", state order " + i + " not empty");
}
}
}
public void remove() {
currentActiveLists[listPtr] =
currentActiveLists[listPtr].newInstance();
}
}
/** Outputs debugging info for this list manager */
public void dump() {
System.out.println("--------------------");
for (ActiveList al : currentActiveLists) {
dumpList(al);
}
}
/**
* Dumps out debugging info for the given active list
*
* @param al the active list to dump
*/
private void dumpList(ActiveList al) {
System.out.println("Size: " + al.size() + " Best token: " + al.getBestToken());
}
}
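createActiveLists() pairs each search state order with a factory, and when fewer factories than state orders are configured it reuses the last factory for all remaining orders (after logging a warning). A toy sketch of that clamping, with strings standing in for ActiveListFactory instances:

public class FactoryMappingSketch {
    public static void main(String[] args) {
        String[] factories = {"wordBeamFactory", "unitBeamFactory"}; // stand-ins
        int numStateOrder = 5;
        for (int order = 0; order < numStateOrder; order++) {
            // clamp to the last factory when we run out, as createActiveLists() does
            int which = Math.min(order, factories.length - 1);
            System.out.println("state order " + order + " -> " + factories[which]);
        }
    }
}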

View file

@ -1,680 +0,0 @@
/*
* Copyright 1999-2002 Carnegie Mellon University.
* Portions Copyright 2002 Sun Microsystems, Inc.
* Portions Copyright 2002 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.search;
import edu.cmu.sphinx.decoder.pruner.Pruner;
import edu.cmu.sphinx.decoder.scorer.AcousticScorer;
import edu.cmu.sphinx.frontend.Data;
import edu.cmu.sphinx.linguist.Linguist;
import edu.cmu.sphinx.linguist.SearchState;
import edu.cmu.sphinx.linguist.SearchStateArc;
import edu.cmu.sphinx.linguist.WordSearchState;
import edu.cmu.sphinx.result.Result;
import edu.cmu.sphinx.util.LogMath;
import edu.cmu.sphinx.util.StatisticsVariable;
import edu.cmu.sphinx.util.Timer;
import edu.cmu.sphinx.util.TimerPool;
import edu.cmu.sphinx.util.props.*;
import java.util.*;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.io.IOException;
/**
* Provides the breadth first search. To perform recognition an application should call initialize before recognition
* begins, and repeatedly call <code> recognize </code> until Result.isFinal() returns true. Once a final result has
* been obtained, <code> terminate </code> should be called.
* <p>
* All scores and probabilities are maintained in the log math log domain.
* <p>
* For information about breadth first search please refer to "Spoken Language Processing", X. Huang, Prentice Hall PTR
*/
// TODO - need to add in timing code.
public class SimpleBreadthFirstSearchManager extends TokenSearchManager {
/** The property that defines the name of the linguist to be used by this search manager. */
@S4Component(type = Linguist.class)
public final static String PROP_LINGUIST = "linguist";
/** The property that defines the name of the pruner to be used by this search manager. */
@S4Component(type = Pruner.class)
public final static String PROP_PRUNER = "pruner";
/** The property that defines the name of the scorer to be used by this search manager. */
@S4Component(type = AcousticScorer.class)
public final static String PROP_SCORER = "scorer";
/** The property that defines the name of the active list factory to be used by this search manager. */
@S4Component(type = ActiveListFactory.class)
public final static String PROP_ACTIVE_LIST_FACTORY = "activeListFactory";
/**
* The property that when set to <code>true</code> will cause the recognizer to count up all the tokens in the
* active list after every frame.
*/
@S4Boolean(defaultValue = false)
public final static String PROP_SHOW_TOKEN_COUNT = "showTokenCount";
/**
* The property that sets the minimum score relative to the maximum score in the word list for pruning. Words with a
* score less than relativeBeamWidth * maximumScore will be pruned from the list
*/
@S4Double(defaultValue = 0.0)
public final static String PROP_RELATIVE_WORD_BEAM_WIDTH = "relativeWordBeamWidth";
/**
* The property that controls whether or not relative beam pruning will be performed on the entry into a
* state.
*/
@S4Boolean(defaultValue = false)
public final static String PROP_WANT_ENTRY_PRUNING = "wantEntryPruning";
/**
* The property that controls how often the grow step is skipped: every growSkipInterval-th frame the grow step is
* omitted, and setting this property to zero disables grow skipping. Setting this number to a small integer will
* increase the speed of the decoder but will also decrease its accuracy. The higher the number, the less often the
* grow step is skipped.
*/
@S4Integer(defaultValue = 0)
public final static String PROP_GROW_SKIP_INTERVAL = "growSkipInterval";
protected Linguist linguist; // Provides grammar/language info
private Pruner pruner; // used to prune the active list
private AcousticScorer scorer; // used to score the active list
protected int currentFrameNumber; // the current frame number
protected long currentCollectTime; // the collect time of the current frame
protected ActiveList activeList; // the list of active tokens
protected List<Token> resultList; // the current set of results
protected LogMath logMath;
private Logger logger;
private String name;
// ------------------------------------
// monitoring data
// ------------------------------------
private Timer scoreTimer; // TODO move these timers out
private Timer pruneTimer;
protected Timer growTimer;
private StatisticsVariable totalTokensScored;
private StatisticsVariable tokensPerSecond;
private StatisticsVariable curTokensScored;
private StatisticsVariable tokensCreated;
private StatisticsVariable viterbiPruned;
private StatisticsVariable beamPruned;
// ------------------------------------
// Working data
// ------------------------------------
protected boolean showTokenCount;
private boolean wantEntryPruning;
protected Map<SearchState, Token> bestTokenMap;
private float logRelativeWordBeamWidth;
private int totalHmms;
private double startTime;
private float threshold;
private float wordThreshold;
private int growSkipInterval;
protected ActiveListFactory activeListFactory;
protected boolean streamEnd;
public SimpleBreadthFirstSearchManager() {
}
/**
* Creates a manager for simple search
*
* @param linguist linguist to configure search space
* @param pruner pruner to prune extra paths
* @param scorer scorer to estimate token probability
* @param activeListFactory factory for list of tokens
* @param showTokenCount show count of the tokens during decoding
* @param relativeWordBeamWidth relative pruning beam for lookahead
* @param growSkipInterval interval to skip growth step
* @param wantEntryPruning entry pruning
*/
public SimpleBreadthFirstSearchManager(Linguist linguist, Pruner pruner,
AcousticScorer scorer, ActiveListFactory activeListFactory,
boolean showTokenCount, double relativeWordBeamWidth,
int growSkipInterval, boolean wantEntryPruning) {
this.name = getClass().getName();
this.logger = Logger.getLogger(name);
this.logMath = LogMath.getLogMath();
this.linguist = linguist;
this.pruner = pruner;
this.scorer = scorer;
this.activeListFactory = activeListFactory;
this.showTokenCount = showTokenCount;
this.growSkipInterval = growSkipInterval;
this.wantEntryPruning = wantEntryPruning;
this.logRelativeWordBeamWidth = logMath.linearToLog(relativeWordBeamWidth);
this.keepAllTokens = true;
}
@Override
public void newProperties(PropertySheet ps) throws PropertyException {
super.newProperties(ps);
logMath = LogMath.getLogMath();
logger = ps.getLogger();
name = ps.getInstanceName();
linguist = (Linguist) ps.getComponent(PROP_LINGUIST);
pruner = (Pruner) ps.getComponent(PROP_PRUNER);
scorer = (AcousticScorer) ps.getComponent(PROP_SCORER);
activeListFactory = (ActiveListFactory) ps.getComponent(PROP_ACTIVE_LIST_FACTORY);
showTokenCount = ps.getBoolean(PROP_SHOW_TOKEN_COUNT);
double relativeWordBeamWidth = ps.getDouble(PROP_RELATIVE_WORD_BEAM_WIDTH);
growSkipInterval = ps.getInt(PROP_GROW_SKIP_INTERVAL);
wantEntryPruning = ps.getBoolean(PROP_WANT_ENTRY_PRUNING);
logRelativeWordBeamWidth = logMath.linearToLog(relativeWordBeamWidth);
this.keepAllTokens = true;
}
/** Called at the start of recognition. Gets the search manager ready to recognize */
public void startRecognition() {
logger.finer("starting recognition");
linguist.startRecognition();
pruner.startRecognition();
scorer.startRecognition();
localStart();
if (startTime == 0.0) {
startTime = System.currentTimeMillis();
}
}
/**
* Performs the recognition for the given number of frames.
*
* @param nFrames the number of frames to recognize
* @return the current result or null if there is no Result (due to the lack of frames to recognize)
*/
public Result recognize(int nFrames) {
boolean done = false;
Result result = null;
streamEnd = false;
for (int i = 0; i < nFrames && !done; i++) {
done = recognize();
}
// generate a new temporary result if the current token is based on a final search state
// remark: the null check is necessary in cases where the search space does not contain scoreable tokens.
if (activeList.getBestToken() != null) {
// to make the current result as correct as possible we undo the last search graph expansion here
ActiveList fixedList = undoLastGrowStep();
// Now create the result using the fixed active-list.
if (!streamEnd)
result =
new Result(fixedList, resultList, currentFrameNumber, done, linguist.getSearchGraph().getWordTokenFirst(), false);
}
if (showTokenCount) {
showTokenCount();
}
return result;
}
/**
* Because growBranches() is called even though no data is left after the last speech frame, the ordering of the
* active list might depend only on the transition probabilities and penalty scores. Therefore we need to undo the
* last grow step up to the final states or the last emitting state in order to fix the list.
* @return newly created list
*/
protected ActiveList undoLastGrowStep() {
ActiveList fixedList = activeList.newInstance();
for (Token token : activeList) {
Token curToken = token.getPredecessor();
// remove the final states that are not the real final ones because they just hide prior final tokens:
while (curToken.getPredecessor() != null && (
(curToken.isFinal() && curToken.getPredecessor() != null && !curToken.getPredecessor().isFinal())
|| (curToken.isEmitting() && curToken.getData() == null) // tokens that have not been scored yet
|| (!curToken.isFinal() && !curToken.isEmitting()))) {
curToken = curToken.getPredecessor();
}
fixedList.add(curToken);
}
return fixedList;
}
/** Terminates a recognition */
public void stopRecognition() {
localStop();
scorer.stopRecognition();
pruner.stopRecognition();
linguist.stopRecognition();
logger.finer("recognition stopped");
}
/**
* Performs recognition for one frame. Returns true if recognition has been completed.
*
* @return <code>true</code> if recognition is completed.
*/
protected boolean recognize() {
boolean more = scoreTokens(); // score emitting tokens
if (more) {
pruneBranches(); // eliminate poor branches
currentFrameNumber++;
if (growSkipInterval == 0
|| (currentFrameNumber % growSkipInterval) != 0) {
growBranches(); // extend remaining branches
}
}
return !more;
}
/** Gets the initial grammar node from the linguist and creates a GrammarNodeToken */
protected void localStart() {
currentFrameNumber = 0;
curTokensScored.value = 0;
ActiveList newActiveList = activeListFactory.newInstance();
SearchState state = linguist.getSearchGraph().getInitialState();
newActiveList.add(new Token(state, -1));
activeList = newActiveList;
growBranches();
}
/** Local cleanup for this search manager */
protected void localStop() {
}
/**
* Goes through the active list of tokens and expands each token, finding the set of successor tokens until all the
* successor tokens are emitting tokens.
*/
protected void growBranches() {
int mapSize = activeList.size() * 10;
if (mapSize == 0) {
mapSize = 1;
}
growTimer.start();
bestTokenMap = new HashMap<SearchState, Token>(mapSize);
ActiveList oldActiveList = activeList;
resultList = new LinkedList<Token>();
activeList = activeListFactory.newInstance();
threshold = oldActiveList.getBeamThreshold();
wordThreshold = oldActiveList.getBestScore() + logRelativeWordBeamWidth;
for (Token token : oldActiveList) {
collectSuccessorTokens(token);
}
growTimer.stop();
if (logger.isLoggable(Level.FINE)) {
int hmms = activeList.size();
totalHmms += hmms;
logger.fine("Frame: " + currentFrameNumber + " Hmms: "
+ hmms + " total " + totalHmms);
}
}
/**
* Calculate the acoustic scores for the active list. The active list should contain only emitting tokens.
*
* @return <code>true</code> if there are more frames to score, otherwise, false
*/
protected boolean scoreTokens() {
boolean hasMoreFrames = false;
scoreTimer.start();
Data data = scorer.calculateScores(activeList.getTokens());
scoreTimer.stop();
Token bestToken = null;
if (data instanceof Token) {
bestToken = (Token)data;
} else if (data == null) {
streamEnd = true;
}
if (bestToken != null) {
hasMoreFrames = true;
currentCollectTime = bestToken.getCollectTime();
activeList.setBestToken(bestToken);
}
// update statistics
curTokensScored.value += activeList.size();
totalTokensScored.value += activeList.size();
tokensPerSecond.value = totalTokensScored.value / getTotalTime();
// if (logger.isLoggable(Level.FINE)) {
// logger.fine(currentFrameNumber + " " + activeList.size()
// + " " + curTokensScored.value + " "
// + (int) tokensPerSecond.value);
// }
return hasMoreFrames;
}
/**
* Returns the total time since we started
*
* @return the total time (in seconds)
*/
private double getTotalTime() {
return (System.currentTimeMillis() - startTime) / 1000.0;
}
/** Removes unpromising branches from the active list */
protected void pruneBranches() {
int startSize = activeList.size();
pruneTimer.start();
activeList = pruner.prune(activeList);
beamPruned.value += startSize - activeList.size();
pruneTimer.stop();
}
/**
* Gets the best token for this state
*
* @param state the state of interest
* @return the best token
*/
protected Token getBestToken(SearchState state) {
Token best = bestTokenMap.get(state);
if (logger.isLoggable(Level.FINER) && best != null) {
logger.finer("BT " + best + " for state " + state);
}
return best;
}
/**
* Sets the best token for a given state
*
* @param token the best token
* @param state the state
* @return the previous best token for the given state, or null if no previous best token
*/
protected Token setBestToken(Token token, SearchState state) {
return bestTokenMap.put(state, token);
}
public ActiveList getActiveList() {
return activeList;
}
/**
* Collects the next set of emitting tokens from a token and accumulates them in the active or result lists
*
* @param token the token to collect successors from
*/
protected void collectSuccessorTokens(Token token) {
SearchState state = token.getSearchState();
// If this is a final state, add it to the final list
if (token.isFinal()) {
resultList.add(token);
}
if (token.getScore() < threshold) {
return;
}
if (state instanceof WordSearchState
&& token.getScore() < wordThreshold) {
return;
}
SearchStateArc[] arcs = state.getSuccessors();
// For each successor
// calculate the entry score for the token based upon the
// predecessor token score and the transition probabilities
// if the score is better than the best score encountered for
// the SearchState and frame then create a new token, add
// it to the lattice and the SearchState.
// If the token is an emitting token add it to the list,
// otherwise recursively collect the new tokens successors.
for (SearchStateArc arc : arcs) {
SearchState nextState = arc.getState();
// We're actually multiplying the variables, but since
// these come in log(), multiply gets converted to add
float logEntryScore = token.getScore() + arc.getProbability();
if (wantEntryPruning) { // false by default
if (logEntryScore < threshold) {
continue;
}
if (nextState instanceof WordSearchState
&& logEntryScore < wordThreshold) {
continue;
}
}
Token predecessor = getResultListPredecessor(token);
// if not emitting, check to see if we've already visited
// this state during this frame. Expand the token only if we
// haven't visited it already. This prevents the search
// from getting stuck in a loop of states with no
// intervening emitting nodes. This can happen with nasty
// jsgf grammars such as ((foo*)*)*
if (!nextState.isEmitting()) {
Token newToken = new Token(predecessor, nextState, logEntryScore,
arc.getInsertionProbability(),
arc.getLanguageProbability(),
currentCollectTime);
tokensCreated.value++;
if (!isVisited(newToken)) {
collectSuccessorTokens(newToken);
}
continue;
}
Token bestToken = getBestToken(nextState);
if (bestToken == null) {
Token newToken = new Token(predecessor, nextState, logEntryScore,
arc.getInsertionProbability(),
arc.getLanguageProbability(),
currentFrameNumber);
tokensCreated.value++;
setBestToken(newToken, nextState);
activeList.add(newToken);
} else {
if (bestToken.getScore() <= logEntryScore) {
bestToken.update(predecessor, nextState, logEntryScore,
arc.getInsertionProbability(),
arc.getLanguageProbability(),
currentCollectTime);
viterbiPruned.value++;
} else {
viterbiPruned.value++;
}
}
}
}
/**
* Determines whether or not we've visited the state associated with this token since the previous frame.
*
* @param t the token to check
* @return true if we've visited the search state since the last frame
*/
private boolean isVisited(Token t) {
SearchState curState = t.getSearchState();
t = t.getPredecessor();
while (t != null && !t.isEmitting()) {
if (curState.equals(t.getSearchState())) {
return true;
}
t = t.getPredecessor();
}
return false;
}
/** Counts all the tokens in the active list (and displays them). This is an expensive operation. */
protected void showTokenCount() {
if (logger.isLoggable(Level.INFO)) {
Set<Token> tokenSet = new HashSet<Token>();
for (Token token : activeList) {
while (token != null) {
tokenSet.add(token);
token = token.getPredecessor();
}
}
logger.info("Token Lattice size: " + tokenSet.size());
tokenSet = new HashSet<Token>();
for (Token token : resultList) {
while (token != null) {
tokenSet.add(token);
token = token.getPredecessor();
}
}
logger.info("Result Lattice size: " + tokenSet.size());
}
}
/**
* Returns the best token map.
*
* @return the best token map
*/
protected Map<SearchState, Token> getBestTokenMap() {
return bestTokenMap;
}
/**
* Sets the best token Map.
*
* @param bestTokenMap the new best token Map
*/
protected void setBestTokenMap(Map<SearchState, Token> bestTokenMap) {
this.bestTokenMap = bestTokenMap;
}
/**
* Returns the result list.
*
* @return the result list
*/
public List<Token> getResultList() {
return resultList;
}
/**
* Returns the current frame number.
*
* @return the current frame number
*/
public int getCurrentFrameNumber() {
return currentFrameNumber;
}
/**
* Returns the Timer for growing.
*
* @return the Timer for growing
*/
public Timer getGrowTimer() {
return growTimer;
}
/**
* Returns the tokensCreated StatisticsVariable.
*
* @return the tokensCreated StatisticsVariable.
*/
public StatisticsVariable getTokensCreated() {
return tokensCreated;
}
/*
* (non-Javadoc)
*
* @see edu.cmu.sphinx.decoder.search.SearchManager#allocate()
*/
public void allocate() {
totalTokensScored = StatisticsVariable
.getStatisticsVariable("totalTokensScored");
tokensPerSecond = StatisticsVariable
.getStatisticsVariable("tokensScoredPerSecond");
curTokensScored = StatisticsVariable
.getStatisticsVariable("curTokensScored");
tokensCreated = StatisticsVariable
.getStatisticsVariable("tokensCreated");
viterbiPruned = StatisticsVariable
.getStatisticsVariable("viterbiPruned");
beamPruned = StatisticsVariable.getStatisticsVariable("beamPruned");
try {
linguist.allocate();
pruner.allocate();
scorer.allocate();
} catch (IOException e) {
throw new RuntimeException("Allocation of search manager resources failed", e);
}
scoreTimer = TimerPool.getTimer(this, "Score");
pruneTimer = TimerPool.getTimer(this, "Prune");
growTimer = TimerPool.getTimer(this, "Grow");
}
/*
* (non-Javadoc)
*
* @see edu.cmu.sphinx.decoder.search.SearchManager#deallocate()
*/
public void deallocate() {
try {
scorer.deallocate();
pruner.deallocate();
linguist.deallocate();
} catch (IOException e) {
throw new RuntimeException("Deallocation of search manager resources failed", e);
}
}
@Override
public String toString() {
return name;
}
}
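The heart of the manager is the per-frame cycle in recognize(): score the emitting tokens, prune weak branches, then grow the survivors, optionally skipping the grow step on every growSkipInterval-th frame. A stripped-down sketch of that control flow follows; the scorer, pruner, and grower methods are stand-ins for the collaborators wired in above.

public class FrameLoopSketch {
    int currentFrameNumber;
    int growSkipInterval = 0; // zero disables grow skipping

    boolean recognizeFrame() {
        boolean more = scoreTokens();        // acoustic scoring of the active list
        if (more) {
            pruneBranches();                 // absolute and relative beam pruning
            currentFrameNumber++;
            if (growSkipInterval == 0
                    || (currentFrameNumber % growSkipInterval) != 0) {
                growBranches();              // expand to the next emitting states
            }
        }
        return !more;                        // true once the stream has ended
    }

    boolean scoreTokens() { return false; }  // stand-in for the AcousticScorer call
    void pruneBranches() { }                 // stand-in for the Pruner call
    void growBranches() { }                  // stand-in for successor expansion
}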

View file

@ -1,207 +0,0 @@
/*
* Copyright 1999-2004 Carnegie Mellon University.
* Portions Copyright 2004 Sun Microsystems, Inc.
* Portions Copyright 2004 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.search;
import edu.cmu.sphinx.decoder.scorer.Scoreable;
import edu.cmu.sphinx.util.props.PropertyException;
import edu.cmu.sphinx.util.props.PropertySheet;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
/**
* @author plamere
*/
public class SortingActiveListFactory extends ActiveListFactory {
/**
* @param absoluteBeamWidth absolute pruning beam
* @param relativeBeamWidth relative pruning beam
*/
public SortingActiveListFactory(int absoluteBeamWidth,
double relativeBeamWidth)
{
super(absoluteBeamWidth, relativeBeamWidth);
}
public SortingActiveListFactory() {
}
/*
* (non-Javadoc)
*
* @see edu.cmu.sphinx.util.props.Configurable#newProperties(edu.cmu.sphinx.util.props.PropertySheet)
*/
@Override
public void newProperties(PropertySheet ps) throws PropertyException {
super.newProperties(ps);
}
/*
* (non-Javadoc)
*
* @see edu.cmu.sphinx.decoder.search.ActiveListFactory#newInstance()
*/
@Override
public ActiveList newInstance() {
return new SortingActiveList(absoluteBeamWidth, logRelativeBeamWidth);
}
/**
* An active list that tries to be simple and correct. This type of active list will be slow, but should exhibit
* correct behavior. Faster versions of the ActiveList exist (HeapActiveList, TreeActiveList).
* <p>
* This class is not thread safe and should only be used by a single thread.
* <p>
* Note that all scores are maintained in the LogMath log base.
*/
class SortingActiveList implements ActiveList {
private final static int DEFAULT_SIZE = 1000;
private final int absoluteBeamWidth;
private final float logRelativeBeamWidth;
private Token bestToken;
// when the list is changed these things should be
// changed/updated as well
private List<Token> tokenList;
/**
* Creates an empty active list
*
* @param absoluteBeamWidth beam for absolute pruning
* @param logRelativeBeamWidth beam for relative pruning
*/
public SortingActiveList(int absoluteBeamWidth, float logRelativeBeamWidth) {
this.absoluteBeamWidth = absoluteBeamWidth;
this.logRelativeBeamWidth = logRelativeBeamWidth;
int initListSize = absoluteBeamWidth > 0 ? absoluteBeamWidth : DEFAULT_SIZE;
this.tokenList = new ArrayList<Token>(initListSize);
}
/**
* Adds the given token to the list
*
* @param token the token to add
*/
public void add(Token token) {
tokenList.add(token);
if (bestToken == null || token.getScore() > bestToken.getScore()) {
bestToken = token;
}
}
/**
* Purges excess members. Reduces the size of the token list to the absoluteBeamWidth
*
* @return a (possible new) active list
*/
public ActiveList purge() {
// if the absolute beam is zero, this means there
// should be no constraint on the abs beam size at all
// so we will only be relative beam pruning, which means
// that we don't have to sort the list
if (absoluteBeamWidth > 0 && tokenList.size() > absoluteBeamWidth) {
Collections.sort(tokenList, Scoreable.COMPARATOR);
tokenList = tokenList.subList(0, absoluteBeamWidth);
}
return this;
}
/**
* gets the beam threshold based upon the best scoring token
*
* @return the beam threshold
*/
public float getBeamThreshold() {
return getBestScore() + logRelativeBeamWidth;
}
/**
* gets the best score in the list
*
* @return the best score
*/
public float getBestScore() {
float bestScore = -Float.MAX_VALUE;
if (bestToken != null) {
bestScore = bestToken.getScore();
}
return bestScore;
}
/**
* Sets the best scoring token for this active list
*
* @param token the best scoring token
*/
public void setBestToken(Token token) {
bestToken = token;
}
/**
* Gets the best scoring token for this active list
*
* @return the best scoring token
*/
public Token getBestToken() {
return bestToken;
}
/**
* Retrieves the iterator for this token list.
*
* @return the iterator for this token list
*/
public Iterator<Token> iterator() {
return tokenList.iterator();
}
/**
* Gets the list of all tokens
*
* @return the list of tokens
*/
public List<Token> getTokens() {
return tokenList;
}
/**
* Returns the number of tokens on this active list
*
* @return the size of the active list
*/
public final int size() {
return tokenList.size();
}
/* (non-Javadoc)
* @see edu.cmu.sphinx.decoder.search.ActiveList#newInstance()
*/
public ActiveList newInstance() {
return SortingActiveListFactory.this.newInstance();
}
}
}
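purge() implements the absolute beam by sorting best-first and truncating, and an absolute beam of zero means no size constraint at all, leaving only relative pruning. A self-contained sketch with plain Float scores standing in for tokens:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class AbsoluteBeamPurgeSketch {
    public static void main(String[] args) {
        int absoluteBeamWidth = 3;
        List<Float> scores = new ArrayList<Float>(
                Arrays.asList(-10f, -3f, -7f, -1f, -5f));
        if (absoluteBeamWidth > 0 && scores.size() > absoluteBeamWidth) {
            // best (highest) scores first, then keep only the beam width
            Collections.sort(scores, Collections.<Float>reverseOrder());
            scores = scores.subList(0, absoluteBeamWidth);
        }
        System.out.println(scores); // [-1.0, -3.0, -5.0]
    }
}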

View file

@ -1,477 +0,0 @@
/*
* Copyright 1999-2002 Carnegie Mellon University.
* Portions Copyright 2002 Sun Microsystems, Inc.
* Portions Copyright 2002 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.search;
import edu.cmu.sphinx.decoder.scorer.Scoreable;
import edu.cmu.sphinx.decoder.scorer.ScoreProvider;
import edu.cmu.sphinx.frontend.Data;
import edu.cmu.sphinx.frontend.FloatData;
import edu.cmu.sphinx.linguist.HMMSearchState;
import edu.cmu.sphinx.linguist.SearchState;
import edu.cmu.sphinx.linguist.UnitSearchState;
import edu.cmu.sphinx.linguist.WordSearchState;
import edu.cmu.sphinx.linguist.acoustic.Unit;
import edu.cmu.sphinx.linguist.dictionary.Pronunciation;
import edu.cmu.sphinx.linguist.dictionary.Word;
import java.text.DecimalFormat;
import java.util.*;
/**
* Represents a single state in the recognition trellis. Subclasses of a token are used to represent the various
* emitting states.
* <p>
* All scores are maintained in LogMath log base
*/
public class Token implements Scoreable {
private static int curCount;
private static int lastCount;
private static final DecimalFormat scoreFmt = new DecimalFormat("0.0000000E00");
private static final DecimalFormat numFmt = new DecimalFormat("0000");
private Token predecessor;
private float logLanguageScore;
private float logTotalScore;
private float logInsertionScore;
private float logAcousticScore;
private SearchState searchState;
private long collectTime;
private Data data;
/**
* Internal constructor for a token. Used by classes Token, CombineToken, ParallelToken
*
* @param predecessor the predecessor for this token
* @param state the SentenceHMMState associated with this token
* @param logTotalScore the total entry score for this token (in LogMath log base)
* @param logInsertionScore the insertion score associated with this token (in LogMath log base)
* @param logLanguageScore the language score associated with this token (in LogMath log base)
* @param collectTime the frame collection time
*/
public Token(Token predecessor,
SearchState state,
float logTotalScore,
float logInsertionScore,
float logLanguageScore,
long collectTime) {
this.predecessor = predecessor;
this.searchState = state;
this.logTotalScore = logTotalScore;
this.logInsertionScore = logInsertionScore;
this.logLanguageScore = logLanguageScore;
this.collectTime = collectTime;
curCount++;
}
/**
* Creates the initial token
*
* @param state the SearchState associated with this token
* @param collectTime collection time of this token
*/
public Token(SearchState state, long collectTime) {
this(null, state, 0.0f, 0.0f, 0.0f, collectTime);
}
/**
* Creates a Token with the given acoustic and language scores and predecessor.
*
* @param predecessor previous token
* @param logTotalScore total score
* @param logAcousticScore the log acoustic score
* @param logInsertionScore the log insertion score
* @param logLanguageScore the log language score
*/
public Token(Token predecessor,
float logTotalScore,
float logAcousticScore,
float logInsertionScore,
float logLanguageScore) {
this(predecessor, null, logTotalScore, logInsertionScore, logLanguageScore, 0);
this.logAcousticScore = logAcousticScore;
}
/**
* Returns the predecessor for this token, or null if this token has no predecessors
*
* @return the predecessor
*/
public Token getPredecessor() {
return predecessor;
}
/**
* Collect time is different from the frame number because some frames might be skipped by the silence detector
*
* @return collection time in milliseconds
*/
public long getCollectTime() {
return collectTime;
}
/** Sets the feature for this Token.
* @param data features
*/
public void setData(Data data) {
this.data = data;
if (data instanceof FloatData) {
collectTime = ((FloatData)data).getCollectTime();
}
}
/**
* Returns the feature for this Token.
*
* @return the feature for this Token
*/
public Data getData() {
return data;
}
/**
* Returns the score for the token. The score is a combination of language and acoustic scores
*
* @return the score of this frame (in logMath log base)
*/
public float getScore() {
return logTotalScore;
}
/**
* Calculates a score against the given feature. The score can be retrieved
* with {@link #getScore}. The token will keep a reference to the scored feature vector.
*
* @param feature the feature to be scored
* @return the score for the feature
*/
public float calculateScore(Data feature) {
logAcousticScore = ((ScoreProvider) searchState).getScore(feature);
logTotalScore += logAcousticScore;
setData(feature);
return logTotalScore;
}
public float[] calculateComponentScore(Data feature){
return ((ScoreProvider) searchState).getComponentScore(feature);
}
/**
* Normalizes a previously calculated score
*
* @param maxLogScore the score to normalize this score with
* @return the normalized score
*/
public float normalizeScore(float maxLogScore) {
logTotalScore -= maxLogScore;
logAcousticScore -= maxLogScore;
return logTotalScore;
}
/**
* Sets the score for this token
*
* @param logScore the new score for the token (in logMath log base)
*/
public void setScore(float logScore) {
this.logTotalScore = logScore;
}
/**
* Returns the language score associated with this token
*
* @return the language score (in logMath log base)
*/
public float getLanguageScore() {
return logLanguageScore;
}
/**
* Returns the insertion score associated with this token.
* Insertion score is the score of the transition between
* states. It might be transition score from the acoustic model,
* phone insertion score or word insertion probability from
* the linguist.
*
* @return the insertion score (in logMath log base)
*/
public float getInsertionScore() {
return logInsertionScore;
}
/**
* Returns the acoustic score for this token (in logMath log base).
* The acoustic score is the sum of the per-frame GMM scores.
*
* @return score
*/
public float getAcousticScore() {
return logAcousticScore;
}
/**
* Returns the SearchState associated with this token
*
* @return the searchState
*/
public SearchState getSearchState() {
return searchState;
}
/**
* Determines if this token is associated with an emitting state. An emitting state is a state that can be scored
* acoustically.
*
* @return <code>true</code> if this token is associated with an emitting state
*/
public boolean isEmitting() {
return searchState.isEmitting();
}
/**
* Determines if this token is associated with a final SentenceHMM state.
*
* @return <code>true</code> if this token is associated with a final state
*/
public boolean isFinal() {
return searchState.isFinal();
}
/**
* Determines if this token marks the end of a word
*
* @return <code>true</code> if this token marks the end of a word
*/
public boolean isWord() {
return searchState instanceof WordSearchState;
}
/**
* Retrieves the string representation of this object
*
* @return the string representation of this object
*/
@Override
public String toString() {
return
numFmt.format(getCollectTime()) + ' ' +
scoreFmt.format(getScore()) + ' ' +
scoreFmt.format(getAcousticScore()) + ' ' +
scoreFmt.format(getLanguageScore()) + ' ' +
getSearchState();
}
/** dumps a branch of tokens */
public void dumpTokenPath() {
dumpTokenPath(true);
}
/**
* dumps a branch of tokens
*
* @param includeHMMStates if true include all sentence hmm states
*/
public void dumpTokenPath(boolean includeHMMStates) {
Token token = this;
List<Token> list = new ArrayList<Token>();
while (token != null) {
list.add(token);
token = token.getPredecessor();
}
for (int i = list.size() - 1; i >= 0; i--) {
token = list.get(i);
if (includeHMMStates ||
(!(token.getSearchState() instanceof HMMSearchState))) {
System.out.println(" " + token);
}
}
System.out.println();
}
/**
* Returns the string of words leading up to this token.
*
* @param wantFiller if true, filler words are added
* @param wantPronunciations if true append [ phoneme phoneme ... ] after each word
* @return the word path
*/
public String getWordPath(boolean wantFiller, boolean wantPronunciations) {
StringBuilder sb = new StringBuilder();
Token token = this;
while (token != null) {
if (token.isWord()) {
WordSearchState wordState =
(WordSearchState) token.getSearchState();
Pronunciation pron = wordState.getPronunciation();
Word word = wordState.getPronunciation().getWord();
// System.out.println(token.getFrameNumber() + " " + word + " " + token.logLanguageScore + " " + token.logAcousticScore);
if (wantFiller || !word.isFiller()) {
if (wantPronunciations) {
sb.insert(0, ']');
Unit[] u = pron.getUnits();
for (int i = u.length - 1; i >= 0; i--) {
if (i < u.length - 1) sb.insert(0, ',');
sb.insert(0, u[i].getName());
}
sb.insert(0, '[');
}
sb.insert(0, word.getSpelling());
sb.insert(0, ' ');
}
}
token = token.getPredecessor();
}
return sb.toString().trim();
}
/**
* Returns the string of words for this token, with no embedded filler words
*
* @return the string of words
*/
public String getWordPathNoFiller() {
return getWordPath(false, false);
}
/**
* Returns the string of words for this token, with embedded silences
*
* @return the string of words
*/
public String getWordPath() {
return getWordPath(true, false);
}
/**
* Returns the string of words and units for this token, with embedded silences.
*
* @return the string of words and units
*/
public String getWordUnitPath() {
StringBuilder sb = new StringBuilder();
Token token = this;
while (token != null) {
SearchState searchState = token.getSearchState();
if (searchState instanceof WordSearchState) {
WordSearchState wordState = (WordSearchState) searchState;
Word word = wordState.getPronunciation().getWord();
sb.insert(0, ' ' + word.getSpelling());
} else if (searchState instanceof UnitSearchState) {
UnitSearchState unitState = (UnitSearchState) searchState;
Unit unit = unitState.getUnit();
sb.insert(0, ' ' + unit.getName());
}
token = token.getPredecessor();
}
return sb.toString().trim();
}
/**
* Returns the word of this Token if the search state is a WordSearchState. If the search state is not a
* WordSearchState, return null.
*
* @return the word of this Token, or null if this is not a word token
*/
public Word getWord() {
if (isWord()) {
WordSearchState wordState = (WordSearchState) searchState;
return wordState.getPronunciation().getWord();
} else {
return null;
}
}
/** Shows the token count */
public static void showCount() {
System.out.println("Cur count: " + curCount + " new " +
(curCount - lastCount));
lastCount = curCount;
}
/**
* Determines if this branch is valid
*
* @return true if the token and its predecessors are valid
*/
public boolean validate() {
return true;
}
/**
* Return the DecimalFormat object for formatting the print out of scores.
*
* @return the DecimalFormat object for formatting score print outs
*/
protected static DecimalFormat getScoreFormat() {
return scoreFmt;
}
/**
* Return the DecimalFormat object for formatting the print out of numbers
*
* @return the DecimalFormat object for formatting number print outs
*/
protected static DecimalFormat getNumberFormat() {
return numFmt;
}
public void update(Token predecessor, SearchState nextState,
float logEntryScore, float insertionProbability,
float languageProbability, long collectTime) {
this.predecessor = predecessor;
this.searchState = nextState;
this.logTotalScore = logEntryScore;
this.logInsertionScore = insertionProbability;
this.logLanguageScore = languageProbability;
this.collectTime = collectTime;
}
}
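getWordPath() and its relatives all rely on the same backtracking idiom: walk the predecessor chain from the final token and prepend each word token's spelling. A toy sketch of that idiom, with a minimal stand-in class in place of Token:

public class TokenBacktrackSketch {

    static class Tok {
        final Tok predecessor;
        final String word; // non-null only for word tokens
        Tok(Tok predecessor, String word) {
            this.predecessor = predecessor;
            this.word = word;
        }
    }

    static String wordPath(Tok token) {
        StringBuilder sb = new StringBuilder();
        while (token != null) {
            if (token.word != null) {
                sb.insert(0, token.word).insert(0, ' '); // prepend, since we walk backwards
            }
            token = token.predecessor;
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        // hello -> (non-word state) -> world
        Tok t = new Tok(new Tok(new Tok(null, "hello"), null), "world");
        System.out.println(wordPath(t)); // hello world
    }
}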

View file

@ -1,172 +0,0 @@
/*
* Copyright 1999-2002 Carnegie Mellon University.
* Portions Copyright 2002 Sun Microsystems, Inc.
* Portions Copyright 2002 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.search;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import edu.cmu.sphinx.decoder.scorer.Scoreable;
import edu.cmu.sphinx.linguist.SearchState;
/**
* A token heap search manager that maintains a heap of the best tokens for each
* search state instead of a single best token
*
*/
public class TokenHeapSearchManager extends WordPruningBreadthFirstSearchManager {
protected final int maxTokenHeapSize = 3;
Map<Object, TokenHeap> bestTokenMap;
@Override
protected void createBestTokenMap() {
int mapSize = activeList.size() << 2;
if (mapSize == 0) {
mapSize = 1;
}
bestTokenMap = new HashMap<Object, TokenHeap>(mapSize, 0.3F);
}
@Override
protected void setBestToken(Token token, SearchState state) {
TokenHeap th = bestTokenMap.get(state);
if (th == null) {
th = new TokenHeap(maxTokenHeapSize);
bestTokenMap.put(state, th);
}
th.add(token);
}
@Override
protected Token getBestToken(SearchState state) {
// if a token for this exact state exists, return it; otherwise
// return null while the heap isn't full, or the worst scoring token once it is
TokenHeap th = bestTokenMap.get(state);
Token t;
if (th == null) {
return null;
} else if ((t = th.get(state)) != null) {
return t;
} else if (!th.isFull()) {
return null;
} else {
return th.getSmallest();
}
}
/**
* A quick and dirty token heap that allows us to perform token stack
* experiments. It is not very efficient. We will likely replace this with
* something better once we figure out how we want to prune things.
*/
class TokenHeap {
final Token[] tokens;
int curSize;
/**
* Creates a token heap with the maximum size
*
* @param maxSize
* the maximum size of the heap
*/
TokenHeap(int maxSize) {
tokens = new Token[maxSize];
}
/**
* Adds a token to the heap
*
* @param token
* the token to add
*/
void add(Token token) {
// first, if an identical state exists, replace
// it.
if (!tryReplace(token)) {
if (curSize < tokens.length) {
tokens[curSize++] = token;
} else if (token.getScore() > tokens[curSize - 1].getScore()) {
tokens[curSize - 1] = token;
}
}
fixupInsert();
}
/**
* Returns the smallest scoring token on the heap
*
* @return the smallest scoring token
*/
Token getSmallest() {
if (curSize == 0) {
return null;
} else {
return tokens[curSize - 1];
}
}
/**
* Determines if the heap is full
*
* @return <code>true</code> if the heap is full
*/
boolean isFull() {
return curSize == tokens.length;
}
/**
* Checks to see if there is already a token t on the heap that has the
* same search state. If so, this token replaces that one
*
* @param t
* the token to try to add to the heap
* @return <code>true</code> if the token was added
*/
private boolean tryReplace(Token t) {
for (int i = 0; i < curSize; i++) {
if (t.getSearchState().equals(tokens[i].getSearchState())) {
assert t.getScore() > tokens[i].getScore();
tokens[i] = t;
return true;
}
}
return false;
}
/** Orders the heap after an insert */
private void fixupInsert() {
Arrays.sort(tokens, 0, curSize, Scoreable.COMPARATOR); // the toIndex argument is exclusive, so include the slot just modified
}
/**
* returns a token on the heap that matches the given search state
*
* @param s
* the search state
* @return the token that matches, or null
*/
Token get(SearchState s) {
for (int i = 0; i < curSize; i++) {
if (tokens[i].getSearchState().equals(s)) {
return tokens[i];
}
}
return null;
}
}
}
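TokenHeap is effectively a keep-top-k policy over a fixed-size array held in descending score order: append while there is room, otherwise replace the worst entry only when the newcomer beats it. A self-contained sketch of that policy with plain floats standing in for tokens:

import java.util.Arrays;

public class TopKSketch {
    final float[] best;
    int curSize;

    TopKSketch(int maxSize) {
        best = new float[maxSize];
    }

    void add(float score) {
        if (curSize < best.length) {
            best[curSize++] = score;            // room left: just append
        } else if (score > best[curSize - 1]) {
            best[curSize - 1] = score;          // beats the current worst entry
        }
        Arrays.sort(best, 0, curSize);          // ascending...
        reverse();                              // ...then flip to descending
    }

    private void reverse() {
        for (int i = 0, j = curSize - 1; i < j; i++, j--) {
            float t = best[i]; best[i] = best[j]; best[j] = t;
        }
    }

    public static void main(String[] args) {
        TopKSketch heap = new TopKSketch(3);
        for (float s : new float[]{-5f, -1f, -9f, -2f, -7f}) {
            heap.add(s);
        }
        System.out.println(Arrays.toString(
                Arrays.copyOf(heap.best, heap.curSize))); // [-1.0, -2.0, -5.0]
    }
}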

View file

@ -1,86 +0,0 @@
package edu.cmu.sphinx.decoder.search;
import edu.cmu.sphinx.util.props.PropertyException;
import edu.cmu.sphinx.util.props.PropertySheet;
import edu.cmu.sphinx.util.props.S4Boolean;
abstract public class TokenSearchManager implements SearchManager {
/** The property that specifies whether to build a word lattice. */
@S4Boolean(defaultValue = true)
public final static String PROP_BUILD_WORD_LATTICE = "buildWordLattice";
/**
* The property that controls whether or not we keep all tokens. If this is
* set to false, only word tokens are retained, otherwise all tokens are
* retained.
*/
@S4Boolean(defaultValue = false)
public final static String PROP_KEEP_ALL_TOKENS = "keepAllTokens";
protected boolean buildWordLattice;
protected boolean keepAllTokens;
/*
* (non-Javadoc)
*
* @see
* edu.cmu.sphinx.util.props.Configurable#newProperties(edu.cmu.sphinx.util
* .props.PropertySheet)
*/
public void newProperties(PropertySheet ps) throws PropertyException {
buildWordLattice = ps.getBoolean(PROP_BUILD_WORD_LATTICE);
keepAllTokens = ps.getBoolean(PROP_KEEP_ALL_TOKENS);
}
/**
* Find the token to use as a predecessor in resultList given a candidate
* predecessor. There are three cases here:
*
* <ul>
* <li>We want to store everything in resultList. In that case
* {@link #keepAllTokens} is set to true and we just store everything that
* was built before.
* <li>We are only interested in sequence of words. In this case we just
* keep word tokens and ignore everything else. In this case timing and
* scoring information is lost since we keep scores in emitting tokens.
* <li>We want to keep words but we want to keep scores to build a lattice
* from the result list later and {@link #buildWordLattice} is set to true.
* In this case we want to insert an intermediate token to store the score and
* this token will be used during lattice path collapse to get score on
* edge. See {@link edu.cmu.sphinx.result.Lattice} for details of resultList
* compression.
* </ul>
*
* @param token
* the token of interest
* @return the immediate successor word token
*/
protected Token getResultListPredecessor(Token token) {
if (keepAllTokens) {
return token;
}
if(!buildWordLattice) {
if (token.isWord())
return token;
else
return token.getPredecessor();
}
float logAcousticScore = 0.0f;
float logLanguageScore = 0.0f;
float logInsertionScore = 0.0f;
while (token != null && !token.isWord()) {
logAcousticScore += token.getAcousticScore();
logLanguageScore += token.getLanguageScore();
logInsertionScore += token.getInsertionScore();
token = token.getPredecessor();
}
return new Token(token, token.getScore(), logInsertionScore, logAcousticScore, logLanguageScore);
}
}
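To make the third case above concrete, here is a small, self-contained sketch of the word-collapse walk in getResultListPredecessor: it steps back over non-word tokens, folding their scores together (log-domain addition), until it reaches a word. `Node` is a hypothetical stand-in for Token; the real method also builds an intermediate token carrying the folded scores.

```java
/** Hypothetical sketch of the word-collapse walk in getResultListPredecessor. */
final class PredecessorCollapse {
    static final class Node {
        final Node predecessor;
        final boolean isWord;
        final float acousticScore; // log-domain score contributed by this node
        Node(Node predecessor, boolean isWord, float acousticScore) {
            this.predecessor = predecessor;
            this.isWord = isWord;
            this.acousticScore = acousticScore;
        }
    }

    /** Walks back to the nearest word node, folding scores over skipped nodes. */
    static Node collapseToWord(Node node) {
        float logAcoustic = 0.0f;
        while (node != null && !node.isWord) {
            logAcoustic += node.acousticScore; // log domain: multiply becomes add
            node = node.predecessor;
        }
        // The real code stores the folded scores in a new intermediate Token;
        // here we just show that they are preserved.
        System.out.println("folded acoustic score: " + logAcoustic);
        return node;
    }

    public static void main(String[] args) {
        Node word = new Node(null, true, 0.0f);
        Node hmm1 = new Node(word, false, -3.5f);
        Node hmm2 = new Node(hmm1, false, -2.0f);
        collapseToWord(hmm2); // prints -5.5 and returns the word node
    }
}
```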

View file

@ -1,259 +0,0 @@
/*
*
* Copyright 1999-2004 Carnegie Mellon University.
* Portions Copyright 2004 Sun Microsystems, Inc.
* Portions Copyright 2004 Mitsubishi Electronic Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.search;
import edu.cmu.sphinx.decoder.scorer.Scoreable;
import edu.cmu.sphinx.linguist.WordSearchState;
import edu.cmu.sphinx.linguist.dictionary.Word;
import edu.cmu.sphinx.util.props.PropertyException;
import edu.cmu.sphinx.util.props.PropertySheet;
import edu.cmu.sphinx.util.props.S4Integer;
import java.util.*;
/**
* A factory for WordActiveList. The word active list is an active list designed to hold word tokens only. In addition to
* the usual active list properties such as absolute and relative beams, the word active list allows restricting the
* number of copies of any particular word in the word beam. Also the word active list can restrict the number of
* fillers in the beam.
*/
public class WordActiveListFactory extends ActiveListFactory {
/** property that sets the max paths for a single word. (zero disables this feature) */
@S4Integer(defaultValue = 0)
public final static String PROP_MAX_PATHS_PER_WORD = "maxPathsPerWord";
/** property that sets the max filler words allowed in the beam. (zero disables this feature) */
@S4Integer(defaultValue = 1)
public final static String PROP_MAX_FILLER_WORDS = "maxFillerWords";
private int maxPathsPerWord;
private int maxFiller;
/**
* Create factory for word active list
* @param absoluteBeamWidth beam for absolute pruning
* @param relativeBeamWidth beam for relative pruning
* @param maxPathsPerWord maximum number of paths to keep per word
* @param maxFiller maximum number of fillers
*/
public WordActiveListFactory(int absoluteBeamWidth,
double relativeBeamWidth, int maxPathsPerWord, int maxFiller )
{
super(absoluteBeamWidth, relativeBeamWidth);
this.maxPathsPerWord = maxPathsPerWord;
this.maxFiller = maxFiller;
}
public WordActiveListFactory() {
}
/*
* (non-Javadoc)
*
* @see edu.cmu.sphinx.util.props.Configurable#newProperties(edu.cmu.sphinx.util.props.PropertySheet)
*/
@Override
public void newProperties(PropertySheet ps) throws PropertyException {
super.newProperties(ps);
maxPathsPerWord = ps.getInt(PROP_MAX_PATHS_PER_WORD);
maxFiller = ps.getInt(PROP_MAX_FILLER_WORDS);
}
/*
* (non-Javadoc)
*
* @see edu.cmu.sphinx.decoder.search.ActiveListFactory#newInstance()
*/
@Override
public ActiveList newInstance() {
return new WordActiveList();
}
/**
* An active list that manages words. Guarantees only one version of a word.
* <p>
* Note that all scores are maintained in the LogMath log domain.
*/
class WordActiveList implements ActiveList {
private Token bestToken;
private List<Token> tokenList = new LinkedList<Token>();
/**
* Adds the given token to the list
*
* @param token the token to add
*/
public void add(Token token) {
tokenList.add(token);
if (bestToken == null || token.getScore() > bestToken.getScore()) {
bestToken = token;
}
}
/**
* Replaces an old token with a new token
*
* @param oldToken the token to replace (or null, in which case replace works like add).
* @param newToken the new token to be placed in the list.
*/
public void replace(Token oldToken, Token newToken) {
add(newToken);
if (oldToken != null) {
tokenList.remove(oldToken);
}
}
/**
* Purges excess members: removes word duplicates and excess fillers, then
* applies the absolute beam width.
*
* @return a (possibly new) active list
*/
public ActiveList purge() {
int fillerCount = 0;
Map<Word, Integer> countMap = new HashMap<Word, Integer>();
Collections.sort(tokenList, Scoreable.COMPARATOR);
// remove word duplicates
for (Iterator<Token> i = tokenList.iterator(); i.hasNext();) {
Token token = i.next();
WordSearchState wordState = (WordSearchState)token.getSearchState();
Word word = wordState.getPronunciation().getWord();
// only allow maxFiller words
if (maxFiller > 0) {
if (word.isFiller()) {
if (fillerCount < maxFiller) {
fillerCount++;
} else {
i.remove();
continue;
}
}
}
if (maxPathsPerWord > 0) {
Integer count = countMap.get(word);
int c = count == null ? 0 : count;
// Since the tokens are sorted by score, we only
// keep the best maxPathsPerWord tokens for a particular word
if (c < maxPathsPerWord) {
countMap.put(word, c + 1);
} else {
i.remove();
}
}
}
if (tokenList.size() > absoluteBeamWidth) {
tokenList = tokenList.subList(0, absoluteBeamWidth);
}
return this;
}
/**
* Retrieves the iterator for this token list.
*
* @return the iterator for this token list
*/
public Iterator<Token> iterator() {
return tokenList.iterator();
}
/**
* Gets the set of all tokens
*
* @return the set of tokens
*/
public List<Token> getTokens() {
return tokenList;
}
/**
* Returns the number of tokens on this active list
*
* @return the size of the active list
*/
public final int size() {
return tokenList.size();
}
/**
* Gets the beam threshold based upon the best scoring token
*
* @return the beam threshold
*/
public float getBeamThreshold() {
return getBestScore() + logRelativeBeamWidth;
}
/**
* Gets the best score in the list
*
* @return the best score
*/
public float getBestScore() {
float bestScore = -Float.MAX_VALUE;
if (bestToken != null) {
bestScore = bestToken.getScore();
}
return bestScore;
}
/**
* Sets the best scoring token for this active list
*
* @param token the best scoring token
*/
public void setBestToken(Token token) {
bestToken = token;
}
/**
* Gets the best scoring token for this active list
*
* @return the best scoring token
*/
public Token getBestToken() {
return bestToken;
}
/* (non-Javadoc)
* @see edu.cmu.sphinx.decoder.search.ActiveList#createNew()
*/
public ActiveList newInstance() {
return WordActiveListFactory.this.newInstance();
}
}
}
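The purge policy above is easiest to see on plain data: sort best-first, keep at most maxFiller filler entries and at most maxPathsPerWord entries per word, then cut to the absolute beam. A hypothetical sketch, with `Hyp` standing in for a word token:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

/** Hypothetical sketch of WordActiveList.purge() over plain (word, score) pairs. */
final class WordBeamPurge {
    static final class Hyp {
        final String word; final boolean filler; final float score;
        Hyp(String word, boolean filler, float score) {
            this.word = word; this.filler = filler; this.score = score;
        }
    }

    static List<Hyp> purge(List<Hyp> hyps, int maxFiller, int maxPathsPerWord, int absoluteBeam) {
        Collections.sort(hyps, new Comparator<Hyp>() {       // best scores first
            public int compare(Hyp a, Hyp b) { return Float.compare(b.score, a.score); }
        });
        int fillerCount = 0;
        Map<String, Integer> perWord = new HashMap<String, Integer>();
        for (Iterator<Hyp> i = hyps.iterator(); i.hasNext();) {
            Hyp h = i.next();
            if (maxFiller > 0 && h.filler) {                 // cap the filler entries
                if (fillerCount < maxFiller) fillerCount++;
                else { i.remove(); continue; }
            }
            if (maxPathsPerWord > 0) {                       // cap copies of each word
                Integer c = perWord.get(h.word);
                int count = (c == null) ? 0 : c;
                if (count < maxPathsPerWord) perWord.put(h.word, count + 1);
                else i.remove();
            }
        }
        return hyps.size() > absoluteBeam                    // finally, the absolute beam
                ? new ArrayList<Hyp>(hyps.subList(0, absoluteBeam))
                : hyps;
    }
}
```

As in the original, a filler that survives the filler cap still counts against its word's per-word budget; the sketch mirrors that fall-through.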

View file

@ -1,497 +0,0 @@
/*
* Copyright 2014 Carnegie Mellon University.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.search;
// a test search manager.
import java.util.Arrays;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import edu.cmu.sphinx.decoder.pruner.Pruner;
import edu.cmu.sphinx.decoder.scorer.AcousticScorer;
import edu.cmu.sphinx.frontend.Data;
import edu.cmu.sphinx.linguist.Linguist;
import edu.cmu.sphinx.linguist.SearchState;
import edu.cmu.sphinx.linguist.SearchStateArc;
import edu.cmu.sphinx.linguist.WordSearchState;
import edu.cmu.sphinx.linguist.acoustic.tiedstate.Loader;
import edu.cmu.sphinx.linguist.acoustic.tiedstate.Sphinx3Loader;
import edu.cmu.sphinx.linguist.allphone.PhoneHmmSearchState;
import edu.cmu.sphinx.linguist.lextree.LexTreeLinguist.LexTreeHMMState;
import edu.cmu.sphinx.linguist.lextree.LexTreeLinguist.LexTreeNonEmittingHMMState;
import edu.cmu.sphinx.linguist.lextree.LexTreeLinguist.LexTreeWordState;
import edu.cmu.sphinx.linguist.lextree.LexTreeLinguist.LexTreeEndUnitState;
import edu.cmu.sphinx.result.Result;
import edu.cmu.sphinx.util.props.PropertyException;
import edu.cmu.sphinx.util.props.PropertySheet;
import edu.cmu.sphinx.util.props.S4Component;
import edu.cmu.sphinx.util.props.S4Double;
import edu.cmu.sphinx.util.props.S4Integer;
/**
* Provides a breadth-first search with a fast-match heuristic included to
* reduce the number of tokens created.
* <p>
* All scores and probabilities are maintained in the log math log domain.
*/
public class WordPruningBreadthFirstLookaheadSearchManager extends WordPruningBreadthFirstSearchManager {
/** The property that gives direct access to the loader's Gaussian scores for score caching control. */
@S4Component(type = Loader.class)
public final static String PROP_LOADER = "loader";
/**
* The property that defines the name of the linguist to be used for fast
* match.
*/
@S4Component(type = Linguist.class)
public final static String PROP_FASTMATCH_LINGUIST = "fastmatchLinguist";
/** The property that defines the active list factory type for the fast match */
@S4Component(type = ActiveListFactory.class)
public final static String PROP_FM_ACTIVE_LIST_FACTORY = "fastmatchActiveListFactory";
@S4Double(defaultValue = 1.0)
public final static String PROP_LOOKAHEAD_PENALTY_WEIGHT = "lookaheadPenaltyWeight";
/**
* The property that controls the size of the lookahead window. Acceptable
* values are in the range [1..10].
*/
@S4Integer(defaultValue = 5)
public final static String PROP_LOOKAHEAD_WINDOW = "lookaheadWindow";
// -----------------------------------
// Configured Subcomponents
// -----------------------------------
private Linguist fastmatchLinguist; // Provides phones info for fastmatch
private Loader loader;
private ActiveListFactory fastmatchActiveListFactory;
// -----------------------------------
// Lookahead data
// -----------------------------------
private int lookaheadWindow;
private float lookaheadWeight;
private HashMap<Integer, Float> penalties;
private LinkedList<FrameCiScores> ciScores;
// -----------------------------------
// Working data
// -----------------------------------
private int currentFastMatchFrameNumber; // the current frame number for
// lookahead matching
protected ActiveList fastmatchActiveList; // the list of active tokens for
// fast match
protected Map<SearchState, Token> fastMatchBestTokenMap;
private boolean fastmatchStreamEnd;
/**
* Creates a pruning manager with lookahead
* @param linguist a linguist for search space
* @param fastmatchLinguist a linguist for fast search space
* @param pruner pruner to drop tokens
* @param loader model loader
* @param scorer scorer to estimate token probability
* @param activeListManager active list manager to store tokens
* @param fastmatchActiveListFactory fast match active list factory to store phoneloop tokens
* @param showTokenCount show count during decoding
* @param relativeWordBeamWidth relative beam for lookahead pruning
* @param growSkipInterval skip interval for growth
* @param checkStateOrder check order of states during growth
* @param buildWordLattice build a lattice during decoding
* @param maxLatticeEdges max edges to keep in lattice
* @param acousticLookaheadFrames frames to do lookahead
* @param keepAllTokens keep tokens including emitting tokens
* @param lookaheadWindow window for lookahead
* @param lookaheadWeight weight for lookahead pruning
*/
public WordPruningBreadthFirstLookaheadSearchManager(Linguist linguist, Linguist fastmatchLinguist, Loader loader,
Pruner pruner, AcousticScorer scorer, ActiveListManager activeListManager,
ActiveListFactory fastmatchActiveListFactory, boolean showTokenCount, double relativeWordBeamWidth,
int growSkipInterval, boolean checkStateOrder, boolean buildWordLattice, int lookaheadWindow, float lookaheadWeight,
int maxLatticeEdges, float acousticLookaheadFrames, boolean keepAllTokens) {
super(linguist, pruner, scorer, activeListManager, showTokenCount, relativeWordBeamWidth, growSkipInterval,
checkStateOrder, buildWordLattice, maxLatticeEdges, acousticLookaheadFrames, keepAllTokens);
this.loader = loader;
this.fastmatchLinguist = fastmatchLinguist;
this.fastmatchActiveListFactory = fastmatchActiveListFactory;
this.lookaheadWindow = lookaheadWindow;
this.lookaheadWeight = lookaheadWeight;
if (lookaheadWindow < 1 || lookaheadWindow > 10)
throw new IllegalArgumentException("Unsupported lookahead window size: " + lookaheadWindow
+ ". Value in range [1..10] is expected");
this.ciScores = new LinkedList<FrameCiScores>();
this.penalties = new HashMap<Integer, Float>();
if (loader instanceof Sphinx3Loader && ((Sphinx3Loader) loader).hasTiedMixtures())
((Sphinx3Loader) loader).setGauScoresQueueLength(lookaheadWindow + 2);
}
public WordPruningBreadthFirstLookaheadSearchManager() {
}
/*
* (non-Javadoc)
*
* @see
* edu.cmu.sphinx.util.props.Configurable#newProperties(edu.cmu.sphinx.util
* .props.PropertySheet)
*/
@Override
public void newProperties(PropertySheet ps) throws PropertyException {
super.newProperties(ps);
fastmatchLinguist = (Linguist) ps.getComponent(PROP_FASTMATCH_LINGUIST);
fastmatchActiveListFactory = (ActiveListFactory) ps.getComponent(PROP_FM_ACTIVE_LIST_FACTORY);
loader = (Loader) ps.getComponent(PROP_LOADER);
lookaheadWindow = ps.getInt(PROP_LOOKAHEAD_WINDOW);
lookaheadWeight = ps.getFloat(PROP_LOOKAHEAD_PENALTY_WEIGHT);
if (lookaheadWindow < 1 || lookaheadWindow > 10)
throw new PropertyException(WordPruningBreadthFirstLookaheadSearchManager.class.getName(), PROP_LOOKAHEAD_WINDOW,
"Unsupported lookahead window size: " + lookaheadWindow + ". Value in range [1..10] is expected");
ciScores = new LinkedList<FrameCiScores>();
penalties = new HashMap<Integer, Float>();
if (loader instanceof Sphinx3Loader && ((Sphinx3Loader) loader).hasTiedMixtures())
((Sphinx3Loader) loader).setGauScoresQueueLength(lookaheadWindow + 2);
}
/**
* Performs the recognition for the given number of frames.
*
* @param nFrames
* the number of frames to recognize
* @return the current result
*/
@Override
public Result recognize(int nFrames) {
boolean done = false;
Result result = null;
streamEnd = false;
for (int i = 0; i < nFrames && !done; i++) {
if (!fastmatchStreamEnd)
fastMatchRecognize();
penalties.clear();
ciScores.poll();
done = recognize();
}
if (!streamEnd) {
result = new Result(loserManager, activeList, resultList, currentCollectTime, done, linguist.getSearchGraph()
.getWordTokenFirst(), true);
}
// tokenTypeTracker.show();
if (showTokenCount) {
showTokenCount();
}
return result;
}
private void fastMatchRecognize() {
boolean more = scoreFastMatchTokens();
if (more) {
pruneFastMatchBranches();
currentFastMatchFrameNumber++;
createFastMatchBestTokenMap();
growFastmatchBranches();
}
}
/**
* Creates a new best token map with an appropriate initial size
*/
protected void createFastMatchBestTokenMap() {
int mapSize = fastmatchActiveList.size() * 10;
if (mapSize == 0) {
mapSize = 1;
}
fastMatchBestTokenMap = new HashMap<SearchState, Token>(mapSize);
}
/**
* Gets the initial grammar node from the linguist and creates a
* GrammarNodeToken
*/
@Override
protected void localStart() {
currentFastMatchFrameNumber = 0;
if (loader instanceof Sphinx3Loader && ((Sphinx3Loader) loader).hasTiedMixtures())
((Sphinx3Loader) loader).clearGauScores();
// prepare fast match active list
fastmatchActiveList = fastmatchActiveListFactory.newInstance();
SearchState fmInitState = fastmatchLinguist.getSearchGraph().getInitialState();
fastmatchActiveList.add(new Token(fmInitState, currentFastMatchFrameNumber));
createFastMatchBestTokenMap();
growFastmatchBranches();
fastmatchStreamEnd = false;
for (int i = 0; (i < lookaheadWindow - 1) && !fastmatchStreamEnd; i++)
fastMatchRecognize();
super.localStart();
}
/**
* Goes through the fast match active list of tokens and expands each token,
* finding the set of successor tokens until all the successor tokens are
* emitting tokens.
*/
protected void growFastmatchBranches() {
growTimer.start();
ActiveList oldActiveList = fastmatchActiveList;
fastmatchActiveList = fastmatchActiveListFactory.newInstance();
float fastmatchThreshold = oldActiveList.getBeamThreshold();
// TODO more precise range of baseIds, remove magic number
float[] frameCiScores = new float[100];
Arrays.fill(frameCiScores, -Float.MAX_VALUE);
float frameMaxCiScore = -Float.MAX_VALUE;
for (Token token : oldActiveList) {
float tokenScore = token.getScore();
if (tokenScore < fastmatchThreshold)
continue;
// fill the max CI score array that will be used when composing
// token scores in the general search
if (token.getSearchState() instanceof PhoneHmmSearchState) {
int baseId = ((PhoneHmmSearchState) token.getSearchState()).getBaseId();
if (frameCiScores[baseId] < tokenScore)
frameCiScores[baseId] = tokenScore;
if (frameMaxCiScore < tokenScore)
frameMaxCiScore = tokenScore;
}
collectFastMatchSuccessorTokens(token);
}
ciScores.add(new FrameCiScores(frameCiScores, frameMaxCiScore));
growTimer.stop();
}
protected boolean scoreFastMatchTokens() {
boolean moreTokens;
scoreTimer.start();
Data data = scorer.calculateScoresAndStoreData(fastmatchActiveList.getTokens());
scoreTimer.stop();
Token bestToken = null;
if (data instanceof Token) {
bestToken = (Token) data;
} else {
fastmatchStreamEnd = true;
}
moreTokens = (bestToken != null);
fastmatchActiveList.setBestToken(bestToken);
// monitorWords(activeList);
monitorStates(fastmatchActiveList);
// System.out.println("BEST " + bestToken);
curTokensScored.value += fastmatchActiveList.size();
totalTokensScored.value += fastmatchActiveList.size();
return moreTokens;
}
/** Removes unpromising branches from the fast match active list */
protected void pruneFastMatchBranches() {
pruneTimer.start();
fastmatchActiveList = pruner.prune(fastmatchActiveList);
pruneTimer.stop();
}
protected Token getFastMatchBestToken(SearchState state) {
return fastMatchBestTokenMap.get(state);
}
protected void setFastMatchBestToken(Token token, SearchState state) {
fastMatchBestTokenMap.put(state, token);
}
protected void collectFastMatchSuccessorTokens(Token token) {
SearchState state = token.getSearchState();
SearchStateArc[] arcs = state.getSuccessors();
// For each successor
// calculate the entry score for the token based upon the
// predecessor token score and the transition probabilities
// if the score is better than the best score encountered for
// the SearchState and frame then create a new token, add
// it to the lattice and the SearchState.
// If the token is an emitting token add it to the list,
// otherwise recursively collect the new tokens successors.
for (SearchStateArc arc : arcs) {
SearchState nextState = arc.getState();
// We're actually multiplying the variables, but since
// these come in log(), multiply gets converted to add
float logEntryScore = token.getScore() + arc.getProbability();
Token predecessor = getResultListPredecessor(token);
// if not emitting, check to see if we've already visited
// this state during this frame. Expand the token only if we
// haven't visited it already. This prevents the search
// from getting stuck in a loop of states with no
// intervening emitting nodes. This can happen with nasty
// jsgf grammars such as ((foo*)*)*
if (!nextState.isEmitting()) {
Token newToken = new Token(predecessor, nextState, logEntryScore, arc.getInsertionProbability(),
arc.getLanguageProbability(), currentFastMatchFrameNumber);
tokensCreated.value++;
if (!isVisited(newToken)) {
collectFastMatchSuccessorTokens(newToken);
}
continue;
}
Token bestToken = getFastMatchBestToken(nextState);
if (bestToken == null) {
Token newToken = new Token(predecessor, nextState, logEntryScore, arc.getInsertionProbability(),
arc.getLanguageProbability(), currentFastMatchFrameNumber);
tokensCreated.value++;
setFastMatchBestToken(newToken, nextState);
fastmatchActiveList.add(newToken);
} else {
if (bestToken.getScore() <= logEntryScore) {
bestToken.update(predecessor, nextState, logEntryScore, arc.getInsertionProbability(),
arc.getLanguageProbability(), currentFastMatchFrameNumber);
}
}
}
}
/**
* Collects the next set of emitting tokens from a token and accumulates
* them in the active or result lists
*
* @param token
* the token to collect successors from
*/
@Override
protected void collectSuccessorTokens(Token token) {
// tokenTracker.add(token);
// tokenTypeTracker.add(token);
// If this is a final state, add it to the final list
if (token.isFinal()) {
resultList.add(getResultListPredecessor(token));
return;
}
// if this is a non-emitting token and we've already
// visited the same state during this frame, then we
// are in a grammar loop, so we don't continue to expand.
// This check only works properly if we have kept all of the
// tokens (instead of skipping the non-word tokens).
// Note that certain linguists will never generate grammar loops
// (lextree linguist for example). For these cases, it is perfectly
// fine to disable this check by setting keepAllTokens to false
if (!token.isEmitting() && (keepAllTokens && isVisited(token))) {
return;
}
SearchState state = token.getSearchState();
SearchStateArc[] arcs = state.getSuccessors();
Token predecessor = getResultListPredecessor(token);
// For each successor
// calculate the entry score for the token based upon the
// predecessor token score and the transition probabilities
// if the score is better than the best score encountered for
// the SearchState and frame then create a new token, add
// it to the lattice and the SearchState.
// If the token is an emitting token add it to the list,
// otherwise recursively collect the new tokens successors.
float tokenScore = token.getScore();
float beamThreshold = activeList.getBeamThreshold();
boolean stateProducesPhoneHmms = state instanceof LexTreeNonEmittingHMMState || state instanceof LexTreeWordState
|| state instanceof LexTreeEndUnitState;
for (SearchStateArc arc : arcs) {
SearchState nextState = arc.getState();
// prune states using lookahead heuristics
if (stateProducesPhoneHmms) {
if (nextState instanceof LexTreeHMMState) {
Float penalty;
int baseId = ((LexTreeHMMState) nextState).getHMMState().getHMM().getBaseUnit().getBaseID();
if ((penalty = penalties.get(baseId)) == null)
penalty = updateLookaheadPenalty(baseId);
if ((tokenScore + lookaheadWeight * penalty) < beamThreshold)
continue;
}
}
if (checkStateOrder) {
checkStateOrder(state, nextState);
}
// We're actually multiplying the variables, but since
// these come in log(), multiply gets converted to add
float logEntryScore = tokenScore + arc.getProbability();
Token bestToken = getBestToken(nextState);
if (bestToken == null) {
Token newBestToken = new Token(predecessor, nextState, logEntryScore, arc.getInsertionProbability(),
arc.getLanguageProbability(), currentCollectTime);
tokensCreated.value++;
setBestToken(newBestToken, nextState);
activeListAdd(newBestToken);
} else if (bestToken.getScore() < logEntryScore) {
// System.out.println("Updating " + bestToken + " with " +
// newBestToken);
Token oldPredecessor = bestToken.getPredecessor();
bestToken.update(predecessor, nextState, logEntryScore, arc.getInsertionProbability(),
arc.getLanguageProbability(), currentCollectTime);
if (buildWordLattice && nextState instanceof WordSearchState) {
loserManager.addAlternatePredecessor(bestToken, oldPredecessor);
}
} else if (buildWordLattice && nextState instanceof WordSearchState) {
if (predecessor != null) {
loserManager.addAlternatePredecessor(bestToken, predecessor);
}
}
}
}
private Float updateLookaheadPenalty(int baseId) {
if (ciScores.isEmpty())
return 0.0f;
float penalty = -Float.MAX_VALUE;
for (FrameCiScores frameCiScores : ciScores) {
float diff = frameCiScores.scores[baseId] - frameCiScores.maxScore;
if (diff > penalty)
penalty = diff;
}
penalties.put(baseId, penalty);
return penalty;
}
private class FrameCiScores {
public final float[] scores;
public final float maxScore;
public FrameCiScores(float[] scores, float maxScore) {
this.scores = scores;
this.maxScore = maxScore;
}
}
}
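The lookahead pruning above boils down to a margin computation: for each CI phone, take its best score relative to the frame maximum, maximized over the lookahead window; a token is expanded only if its score plus the weighted margin clears the beam. A small sketch of that arithmetic, with hypothetical names:

```java
import java.util.LinkedList;

/** Hypothetical sketch of the CI-phone lookahead margin used for pruning. */
final class LookaheadPenalty {
    static final class FrameScores {
        final float[] ciScores; // best log score per CI phone id in this frame
        final float maxScore;   // best log score over all phones in this frame
        FrameScores(float[] ciScores, float maxScore) {
            this.ciScores = ciScores; this.maxScore = maxScore;
        }
    }

    /** Best margin of this phone to the frame maximum over the window;
     *  always <= 0 in the log domain, 0 when the phone was a frame's best. */
    static float penalty(LinkedList<FrameScores> window, int phoneId) {
        if (window.isEmpty()) return 0.0f;
        float best = -Float.MAX_VALUE;
        for (FrameScores f : window) {
            float margin = f.ciScores[phoneId] - f.maxScore;
            if (margin > best) best = margin;
        }
        return best;
    }

    /** The prune test from collectSuccessorTokens: expand only if the
     *  penalized score still clears the beam threshold. */
    static boolean keep(float tokenScore, float penalty, float weight, float beamThreshold) {
        return tokenScore + weight * penalty >= beamThreshold;
    }
}
```

Because the original caches penalties per base id in the penalties map (cleared each frame), the margin is computed at most once per phone per frame.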

View file

@ -1,796 +0,0 @@
/*
* Copyright 1999-2002 Carnegie Mellon University.
* Portions Copyright 2002 Sun Microsystems, Inc.
* Portions Copyright 2002 Mitsubishi Electric Research Laboratories.
* All Rights Reserved. Use is subject to license terms.
*
* See the file "license.terms" for information on usage and
* redistribution of this file, and for a DISCLAIMER OF ALL
* WARRANTIES.
*
*/
package edu.cmu.sphinx.decoder.search;
// a test search manager.
import edu.cmu.sphinx.decoder.pruner.Pruner;
import edu.cmu.sphinx.decoder.scorer.AcousticScorer;
import edu.cmu.sphinx.frontend.Data;
import edu.cmu.sphinx.linguist.*;
import edu.cmu.sphinx.result.Result;
import edu.cmu.sphinx.util.LogMath;
import edu.cmu.sphinx.util.StatisticsVariable;
import edu.cmu.sphinx.util.Timer;
import edu.cmu.sphinx.util.TimerPool;
import edu.cmu.sphinx.util.props.*;
import java.io.IOException;
import java.util.*;
import java.util.logging.Level;
import java.util.logging.Logger;
/**
* Provides a breadth-first search. To perform recognition, an application
* should call initialize before recognition begins, and repeatedly call
* <code> recognize </code> until Result.isFinal() returns true. Once a final
* result has been obtained, <code> stopRecognition </code> should be called.
* <p>
* All scores and probabilities are maintained in the log math log domain.
*/
public class WordPruningBreadthFirstSearchManager extends TokenSearchManager {
/**
* The property that defines the name of the linguist to be used by this
* search manager.
*/
@S4Component(type = Linguist.class)
public final static String PROP_LINGUIST = "linguist";
/**
* The property that defines the name of the linguist to be used by this
* search manager.
*/
@S4Component(type = Pruner.class)
public final static String PROP_PRUNER = "pruner";
/**
* The property that defines the name of the scorer to be used by this
* search manager.
*/
@S4Component(type = AcousticScorer.class)
public final static String PROP_SCORER = "scorer";
/**
* The property that, when set to <code>true</code>, will cause the
* recognizer to count up all the tokens in the active list after every
* frame.
*/
@S4Boolean(defaultValue = false)
public final static String PROP_SHOW_TOKEN_COUNT = "showTokenCount";
/**
* The property that controls the number of frames processed for every time
* the decode growth step is skipped. Setting this property to zero disables
* grow skipping. Setting this number to a small integer will increase the
* speed of the decoder but will also decrease its accuracy. The higher the
* number, the less often the grow code is skipped. Values of 6-8 are known
* to be good enough for large vocabulary tasks; a value of 6 means that one
* out of every 6 frames will be skipped.
*/
@S4Integer(defaultValue = 0)
public final static String PROP_GROW_SKIP_INTERVAL = "growSkipInterval";
/** The property that defines the type of active list to use */
@S4Component(type = ActiveListManager.class)
public final static String PROP_ACTIVE_LIST_MANAGER = "activeListManager";
/** The property for checking if the order of states is valid. */
@S4Boolean(defaultValue = false)
public final static String PROP_CHECK_STATE_ORDER = "checkStateOrder";
/** The property that specifies the maximum lattice edges */
@S4Integer(defaultValue = 100)
public final static String PROP_MAX_LATTICE_EDGES = "maxLatticeEdges";
/**
* The property that controls the amount of simple acoustic lookahead
* performed. Setting the property to zero (the default) disables simple
* acoustic lookahead. The lookahead need not be an integer.
*/
@S4Double(defaultValue = 0)
public final static String PROP_ACOUSTIC_LOOKAHEAD_FRAMES = "acousticLookaheadFrames";
/** The property that specifies the relative beam width */
@S4Double(defaultValue = 0.0)
// TODO: this should be a more meaningful default e.g. the common 1E-80
public final static String PROP_RELATIVE_BEAM_WIDTH = "relativeBeamWidth";
// -----------------------------------
// Configured Subcomponents
// -----------------------------------
protected Linguist linguist; // Provides grammar/language info
protected Pruner pruner; // used to prune the active list
protected AcousticScorer scorer; // used to score the active list
private ActiveListManager activeListManager;
protected LogMath logMath;
// -----------------------------------
// Configuration data
// -----------------------------------
protected Logger logger;
protected boolean showTokenCount;
protected boolean checkStateOrder;
private int growSkipInterval;
protected float relativeBeamWidth;
protected float acousticLookaheadFrames;
private int maxLatticeEdges = 100;
// -----------------------------------
// Instrumentation
// -----------------------------------
protected Timer scoreTimer;
protected Timer pruneTimer;
protected Timer growTimer;
protected StatisticsVariable totalTokensScored;
protected StatisticsVariable curTokensScored;
protected StatisticsVariable tokensCreated;
private long tokenSum;
private int tokenCount;
// -----------------------------------
// Working data
// -----------------------------------
protected int currentFrameNumber; // the current frame number
protected long currentCollectTime; // the collect time of the current frame
protected ActiveList activeList; // the list of active tokens
protected List<Token> resultList; // the current set of results
protected Map<SearchState, Token> bestTokenMap;
protected AlternateHypothesisManager loserManager;
private int numStateOrder;
// private TokenTracker tokenTracker;
// private TokenTypeTracker tokenTypeTracker;
protected boolean streamEnd;
/**
* Creates a pruning manager with separate lists for tokens
* @param linguist a linguist for search space
* @param pruner pruner to drop tokens
* @param scorer scorer to estimate token probability
* @param activeListManager active list manager to store tokens
* @param showTokenCount show count during decoding
* @param relativeWordBeamWidth relative beam for lookahead pruning
* @param growSkipInterval skip interval for growth
* @param checkStateOrder check order of states during growth
* @param buildWordLattice build a lattice during decoding
* @param maxLatticeEdges max edges to keep in lattice
* @param acousticLookaheadFrames frames to do lookahead
* @param keepAllTokens keep tokens including emitting tokens
*/
public WordPruningBreadthFirstSearchManager(Linguist linguist, Pruner pruner, AcousticScorer scorer,
ActiveListManager activeListManager, boolean showTokenCount, double relativeWordBeamWidth, int growSkipInterval,
boolean checkStateOrder, boolean buildWordLattice, int maxLatticeEdges, float acousticLookaheadFrames,
boolean keepAllTokens) {
this.logger = Logger.getLogger(getClass().getName());
this.logMath = LogMath.getLogMath();
this.linguist = linguist;
this.pruner = pruner;
this.scorer = scorer;
this.activeListManager = activeListManager;
this.showTokenCount = showTokenCount;
this.growSkipInterval = growSkipInterval;
this.checkStateOrder = checkStateOrder;
this.buildWordLattice = buildWordLattice;
this.maxLatticeEdges = maxLatticeEdges;
this.acousticLookaheadFrames = acousticLookaheadFrames;
this.keepAllTokens = keepAllTokens;
this.relativeBeamWidth = logMath.linearToLog(relativeWordBeamWidth);
}
public WordPruningBreadthFirstSearchManager() {
}
/*
* (non-Javadoc)
*
* @see
* edu.cmu.sphinx.util.props.Configurable#newProperties(edu.cmu.sphinx.util
* .props.PropertySheet)
*/
@Override
public void newProperties(PropertySheet ps) throws PropertyException {
super.newProperties(ps);
logMath = LogMath.getLogMath();
logger = ps.getLogger();
linguist = (Linguist) ps.getComponent(PROP_LINGUIST);
pruner = (Pruner) ps.getComponent(PROP_PRUNER);
scorer = (AcousticScorer) ps.getComponent(PROP_SCORER);
activeListManager = (ActiveListManager) ps.getComponent(PROP_ACTIVE_LIST_MANAGER);
showTokenCount = ps.getBoolean(PROP_SHOW_TOKEN_COUNT);
growSkipInterval = ps.getInt(PROP_GROW_SKIP_INTERVAL);
checkStateOrder = ps.getBoolean(PROP_CHECK_STATE_ORDER);
maxLatticeEdges = ps.getInt(PROP_MAX_LATTICE_EDGES);
acousticLookaheadFrames = ps.getFloat(PROP_ACOUSTIC_LOOKAHEAD_FRAMES);
relativeBeamWidth = logMath.linearToLog(ps.getDouble(PROP_RELATIVE_BEAM_WIDTH));
}
/*
* (non-Javadoc)
*
* @see edu.cmu.sphinx.decoder.search.SearchManager#allocate()
*/
public void allocate() {
// tokenTracker = new TokenTracker();
// tokenTypeTracker = new TokenTypeTracker();
scoreTimer = TimerPool.getTimer(this, "Score");
pruneTimer = TimerPool.getTimer(this, "Prune");
growTimer = TimerPool.getTimer(this, "Grow");
totalTokensScored = StatisticsVariable.getStatisticsVariable("totalTokensScored");
curTokensScored = StatisticsVariable.getStatisticsVariable("curTokensScored");
tokensCreated = StatisticsVariable.getStatisticsVariable("tokensCreated");
try {
linguist.allocate();
pruner.allocate();
scorer.allocate();
} catch (IOException e) {
throw new RuntimeException("Allocation of search manager resources failed", e);
}
}
/*
* (non-Javadoc)
*
* @see edu.cmu.sphinx.decoder.search.SearchManager#deallocate()
*/
public void deallocate() {
try {
scorer.deallocate();
pruner.deallocate();
linguist.deallocate();
} catch (IOException e) {
throw new RuntimeException("Deallocation of search manager resources failed", e);
}
}
/**
* Called at the start of recognition. Gets the search manager ready to
* recognize
*/
public void startRecognition() {
linguist.startRecognition();
pruner.startRecognition();
scorer.startRecognition();
localStart();
}
/**
* Performs the recognition for the given number of frames.
*
* @param nFrames
* the number of frames to recognize
* @return the current result
*/
public Result recognize(int nFrames) {
boolean done = false;
Result result = null;
streamEnd = false;
for (int i = 0; i < nFrames && !done; i++) {
done = recognize();
}
if (!streamEnd) {
result = new Result(loserManager, activeList, resultList, currentCollectTime, done, linguist.getSearchGraph()
.getWordTokenFirst(), true);
}
// tokenTypeTracker.show();
if (showTokenCount) {
showTokenCount();
}
return result;
}
protected boolean recognize() {
activeList = activeListManager.getEmittingList();
boolean more = scoreTokens();
if (more) {
pruneBranches();
currentFrameNumber++;
if (growSkipInterval == 0 || (currentFrameNumber % growSkipInterval) != 0) {
clearCollectors();
growEmittingBranches();
growNonEmittingBranches();
}
}
return !more;
}
/**
* Clears lists and maps before next expansion stage
*/
private void clearCollectors() {
resultList = new LinkedList<Token>();
createBestTokenMap();
activeListManager.clearEmittingList();
}
/**
* Creates a new best token map with an appropriate initial size
*/
protected void createBestTokenMap() {
int mapSize = activeList.size() * 10;
if (mapSize == 0) {
mapSize = 1;
}
bestTokenMap = new HashMap<SearchState, Token>(mapSize, 0.3F);
}
/** Terminates a recognition */
public void stopRecognition() {
localStop();
scorer.stopRecognition();
pruner.stopRecognition();
linguist.stopRecognition();
}
/**
* Gets the initial grammar node from the linguist and creates a
* GrammarNodeToken
*/
protected void localStart() {
SearchGraph searchGraph = linguist.getSearchGraph();
currentFrameNumber = 0;
curTokensScored.value = 0;
numStateOrder = searchGraph.getNumStateOrder();
activeListManager.setNumStateOrder(numStateOrder);
if (buildWordLattice) {
loserManager = new AlternateHypothesisManager(maxLatticeEdges);
}
SearchState state = searchGraph.getInitialState();
activeList = activeListManager.getEmittingList();
activeList.add(new Token(state, -1));
clearCollectors();
growBranches();
growNonEmittingBranches();
// tokenTracker.setEnabled(false);
// tokenTracker.startUtterance();
}
/** Local cleanup for this search manager */
protected void localStop() {
// tokenTracker.stopUtterance();
}
/**
* Goes through the active list of tokens and expands each token, finding
* the set of successor tokens until all the successor tokens are emitting
* tokens.
*/
protected void growBranches() {
growTimer.start();
float relativeBeamThreshold = activeList.getBeamThreshold();
if (logger.isLoggable(Level.FINE)) {
logger.fine("Frame: " + currentFrameNumber + " thresh : " + relativeBeamThreshold + " bs "
+ activeList.getBestScore() + " tok " + activeList.getBestToken());
}
for (Token token : activeList) {
if (token.getScore() >= relativeBeamThreshold && allowExpansion(token)) {
collectSuccessorTokens(token);
}
}
growTimer.stop();
}
/**
* Grows the emitting branches. This version applies a simple acoustic
* lookahead based upon the rate of change in the current acoustic score.
*/
protected void growEmittingBranches() {
if (acousticLookaheadFrames <= 0.0f) {
growBranches();
return;
}
growTimer.start();
float bestScore = -Float.MAX_VALUE;
for (Token t : activeList) {
float score = t.getScore() + t.getAcousticScore() * acousticLookaheadFrames;
if (score > bestScore) {
bestScore = score;
}
}
float relativeBeamThreshold = bestScore + relativeBeamWidth;
for (Token t : activeList) {
if (t.getScore() + t.getAcousticScore() * acousticLookaheadFrames > relativeBeamThreshold)
collectSuccessorTokens(t);
}
growTimer.stop();
}
/**
* Grow the non-emitting branches, until the tokens reach an emitting state.
*/
private void growNonEmittingBranches() {
for (Iterator<ActiveList> i = activeListManager.getNonEmittingListIterator(); i.hasNext();) {
activeList = i.next();
if (activeList != null) {
i.remove();
pruneBranches();
growBranches();
}
}
}
/**
* Calculate the acoustic scores for the active list. The active list should
* contain only emitting tokens.
*
* @return <code>true</code> if there are more frames to score, otherwise,
* false
*/
protected boolean scoreTokens() {
boolean moreTokens;
scoreTimer.start();
Data data = scorer.calculateScores(activeList.getTokens());
scoreTimer.stop();
Token bestToken = null;
if (data instanceof Token) {
bestToken = (Token) data;
} else if (data == null) {
streamEnd = true;
}
if (bestToken != null) {
currentCollectTime = bestToken.getCollectTime();
}
moreTokens = (bestToken != null);
activeList.setBestToken(bestToken);
// monitorWords(activeList);
monitorStates(activeList);
// System.out.println("BEST " + bestToken);
curTokensScored.value += activeList.size();
totalTokensScored.value += activeList.size();
return moreTokens;
}
/**
* Keeps track of and reports all of the active word histories for the given
* active list
*
* @param activeList
* the active list to track
*/
@SuppressWarnings("unused")
private void monitorWords(ActiveList activeList) {
// WordTracker tracker1 = new WordTracker(currentFrameNumber);
//
// for (Token t : activeList) {
// tracker1.add(t);
// }
// tracker1.dump();
//
// TokenTracker tracker2 = new TokenTracker();
//
// for (Token t : activeList) {
// tracker2.add(t);
// }
// tracker2.dumpSummary();
// tracker2.dumpDetails();
//
// TokenTypeTracker tracker3 = new TokenTypeTracker();
//
// for (Token t : activeList) {
// tracker3.add(t);
// }
// tracker3.dump();
// StateHistoryTracker tracker4 = new
// StateHistoryTracker(currentFrameNumber);
// for (Token t : activeList) {
// tracker4.add(t);
// }
// tracker4.dump();
}
/**
* Keeps track of and reports statistics about the number of active states
*
* @param activeList
* the active list of states
*/
protected void monitorStates(ActiveList activeList) {
tokenSum += activeList.size();
tokenCount++;
if ((tokenCount % 1000) == 0) {
logger.info("Average Tokens/State: " + (tokenSum / tokenCount));
}
}
/** Removes unpromising branches from the active list */
protected void pruneBranches() {
pruneTimer.start();
activeList = pruner.prune(activeList);
pruneTimer.stop();
}
/**
* Gets the best token for this state
*
* @param state
* the state of interest
* @return the best token
*/
protected Token getBestToken(SearchState state) {
return bestTokenMap.get(state);
}
/**
* Sets the best token for a given state
*
* @param token
* the best token
* @param state
* the state
*/
protected void setBestToken(Token token, SearchState state) {
bestTokenMap.put(state, token);
}
/**
* Checks that the given two states are in legitimate order.
*
* @param fromState parent state
* @param toState child state
*/
protected void checkStateOrder(SearchState fromState, SearchState toState) {
if (fromState.getOrder() == numStateOrder - 1) {
return;
}
if (fromState.getOrder() > toState.getOrder()) {
throw new Error("IllegalState order: from " + fromState.getClass().getName() + ' ' + fromState.toPrettyString()
+ " order: " + fromState.getOrder() + " to " + toState.getClass().getName() + ' ' + toState.toPrettyString()
+ " order: " + toState.getOrder());
}
}
/**
* Collects the next set of emitting tokens from a token and accumulates
* them in the active or result lists
*
* @param token
* the token to collect successors from
*/
protected void collectSuccessorTokens(Token token) {
// tokenTracker.add(token);
// tokenTypeTracker.add(token);
// If this is a final state, add it to the final list
if (token.isFinal()) {
resultList.add(getResultListPredecessor(token));
return;
}
// if this is a non-emitting token and we've already
// visited the same state during this frame, then we
// are in a grammar loop, so we don't continue to expand.
// This check only works properly if we have kept all of the
// tokens (instead of skipping the non-word tokens).
// Note that certain linguists will never generate grammar loops
// (lextree linguist for example). For these cases, it is perfectly
// fine to disable this check by setting keepAllTokens to false
if (!token.isEmitting() && (keepAllTokens && isVisited(token))) {
return;
}
SearchState state = token.getSearchState();
SearchStateArc[] arcs = state.getSuccessors();
Token predecessor = getResultListPredecessor(token);
// For each successor
// calculate the entry score for the token based upon the
// predecessor token score and the transition probabilities
// if the score is better than the best score encountered for
// the SearchState and frame then create a new token, add
// it to the lattice and the SearchState.
// If the token is an emitting token add it to the list,
// otherwise recursively collect the new tokens successors.
for (SearchStateArc arc : arcs) {
SearchState nextState = arc.getState();
if (checkStateOrder) {
checkStateOrder(state, nextState);
}
// We're actually multiplying the variables, but since
// these come in log(), multiply gets converted to add
float logEntryScore = token.getScore() + arc.getProbability();
Token bestToken = getBestToken(nextState);
if (bestToken == null) {
Token newBestToken = new Token(predecessor, nextState, logEntryScore, arc.getInsertionProbability(),
arc.getLanguageProbability(), currentCollectTime);
tokensCreated.value++;
setBestToken(newBestToken, nextState);
activeListAdd(newBestToken);
} else if (bestToken.getScore() < logEntryScore) {
// System.out.println("Updating " + bestToken + " with " +
// newBestToken);
Token oldPredecessor = bestToken.getPredecessor();
bestToken.update(predecessor, nextState, logEntryScore, arc.getInsertionProbability(),
arc.getLanguageProbability(), currentCollectTime);
if (buildWordLattice && nextState instanceof WordSearchState) {
loserManager.addAlternatePredecessor(bestToken, oldPredecessor);
}
} else if (buildWordLattice && nextState instanceof WordSearchState) {
if (predecessor != null) {
loserManager.addAlternatePredecessor(bestToken, predecessor);
}
}
}
}
/**
* Determines whether or not we've visited the state associated with this
* token since the previous frame.
*
* @param t token to check
* @return true if we've visited the search state since the last frame
*/
protected boolean isVisited(Token t) {
SearchState curState = t.getSearchState();
t = t.getPredecessor();
while (t != null && !t.isEmitting()) {
if (curState.equals(t.getSearchState())) {
System.out.println("CS " + curState + " match " + t.getSearchState());
return true;
}
t = t.getPredecessor();
}
return false;
}
protected void activeListAdd(Token token) {
activeListManager.add(token);
}
/**
* Determine if the given token should be expanded
*
* @param t
* the token to test
* @return <code>true</code> if the token should be expanded
*/
protected boolean allowExpansion(Token t) {
return true; // currently disabled
}
/**
* Counts all the tokens in the active list (and displays them). This is an
* expensive operation.
*/
protected void showTokenCount() {
Set<Token> tokenSet = new HashSet<Token>();
for (Token token : activeList) {
while (token != null) {
tokenSet.add(token);
token = token.getPredecessor();
}
}
System.out.println("Token Lattice size: " + tokenSet.size());
tokenSet = new HashSet<Token>();
for (Token token : resultList) {
while (token != null) {
tokenSet.add(token);
token = token.getPredecessor();
}
}
System.out.println("Result Lattice size: " + tokenSet.size());
}
/**
* Returns the ActiveList.
*
* @return the ActiveList
*/
public ActiveList getActiveList() {
return activeList;
}
/**
* Sets the ActiveList.
*
* @param activeList
* the new ActiveList
*/
public void setActiveList(ActiveList activeList) {
this.activeList = activeList;
}
/**
* Returns the result list.
*
* @return the result list
*/
public List<Token> getResultList() {
return resultList;
}
/**
* Sets the result list.
*
* @param resultList
* the new result list
*/
public void setResultList(List<Token> resultList) {
this.resultList = resultList;
}
/**
* Returns the current frame number.
*
* @return the current frame number
*/
public int getCurrentFrameNumber() {
return currentFrameNumber;
}
/**
* Returns the Timer for growing.
*
* @return the Timer for growing
*/
public Timer getGrowTimer() {
return growTimer;
}
/**
* Returns the tokensCreated StatisticsVariable.
*
* @return the tokensCreated StatisticsVariable.
*/
public StatisticsVariable getTokensCreated() {
return tokensCreated;
}
}
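growEmittingBranches() prunes on a projection: each token's score is extrapolated by acousticLookaheadFrames times its last acoustic score, and the relative beam is applied to the projected values rather than the raw ones. A compact sketch of that two-pass beam, assuming a hypothetical `Tok` type:

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch of the acoustic-lookahead beam in growEmittingBranches(). */
final class AcousticLookaheadBeam {
    static final class Tok {
        final float score;         // accumulated log score
        final float acousticScore; // last frame's acoustic log score
        Tok(float score, float acousticScore) {
            this.score = score; this.acousticScore = acousticScore;
        }
        float projected(float frames) { return score + acousticScore * frames; }
    }

    /** Two passes: find the best projected score, then beam against it. */
    static List<Tok> survivors(List<Tok> active, float lookaheadFrames, float logRelativeBeam) {
        float best = -Float.MAX_VALUE;
        for (Tok t : active) {
            float p = t.projected(lookaheadFrames);
            if (p > best) best = p;
        }
        float threshold = best + logRelativeBeam; // logRelativeBeam <= 0 in log domain
        List<Tok> kept = new ArrayList<Tok>();
        for (Tok t : active) {
            if (t.projected(lookaheadFrames) > threshold) kept.add(t);
        }
        return kept;
    }
}
```

With acousticLookaheadFrames <= 0 the original falls back to growBranches(), i.e., the plain unprojected beam.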

View file

@ -1,140 +0,0 @@
package edu.cmu.sphinx.decoder.search.stats;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import edu.cmu.sphinx.decoder.search.Token;
import edu.cmu.sphinx.linguist.WordSequence;
/** A class that keeps track of word histories */
public class StateHistoryTracker {
final Map<WordSequence, WordStats> statMap;
final int frameNumber;
int stateCount;
int maxWordHistories;
/**
* Creates a word tracker for the given frame number
*
* @param frameNumber the frame number
*/
public StateHistoryTracker(int frameNumber) {
statMap = new HashMap<WordSequence, WordStats>();
this.frameNumber = frameNumber;
}
/**
* Adds a word history for the given token to the word tracker
*
* @param t the token to add
*/
public void add(Token t) {
stateCount++;
WordSequence ws = getWordSequence(t);
WordStats stats = statMap.get(ws);
if (stats == null) {
stats = new WordStats(ws);
statMap.put(ws, stats);
}
stats.update(t);
}
/** Dumps the word histories in the tracker */
public void dump() {
dumpSummary();
List<WordStats> stats = new ArrayList<WordStats>(statMap.values());
Collections.sort(stats, WordStats.COMPARATOR);
for (WordStats stat : stats) {
System.out.println(" " + stat);
}
}
/** Dumps summary information in the tracker */
void dumpSummary() {
System.out.println("Frame: " + frameNumber + " states: " + stateCount
+ " histories " + statMap.size());
}
/**
* Given a token, gets the history sequence
*
* @param token the token of interest
* @return the word sequence for the token
*/
private WordSequence getWordSequence(Token token) {
return token.getSearchState().getWordHistory();
}
/** Keeps track of statistics for a particular word sequence */
static class WordStats {
public final static Comparator<WordStats> COMPARATOR = new Comparator<WordStats>() {
public int compare(WordStats ws1, WordStats ws2) {
if (ws1.maxScore > ws2.maxScore) {
return -1;
} else if (ws1.maxScore == ws2.maxScore) {
return 0;
} else {
return 1;
}
}
};
private int size;
private float maxScore;
private float minScore;
private final WordSequence ws;
/**
* Creates word statistics for the given sequence
*
* @param ws the word sequence
*/
WordStats(WordSequence ws) {
size = 0;
maxScore = -Float.MAX_VALUE;
minScore = Float.MAX_VALUE;
this.ws = ws;
}
/**
* Updates the statistics based upon the scores for the given token
*
* @param t the token
*/
void update(Token t) {
size++;
if (t.getScore() > maxScore) {
maxScore = t.getScore();
}
if (t.getScore() < minScore) {
minScore = t.getScore();
}
}
/**
* Returns a string representation of the statistics
*
* @return a string representation
*/
@Override
public String toString() {
return "states:" + size + " max:" + maxScore + " min:" + minScore + ' '
+ ws;
}
}
}

View file

@ -1,198 +0,0 @@
package edu.cmu.sphinx.decoder.search.stats;
import java.util.HashMap;
import java.util.Map;
import edu.cmu.sphinx.decoder.search.Token;
import edu.cmu.sphinx.linguist.HMMSearchState;
/** This debugging class is used to track the number of active tokens per state */
public class TokenTracker {
private Map<Object, TokenStats> stateMap;
private boolean enabled;
private int frame;
private int utteranceStateCount;
private int utteranceMaxStates;
private int utteranceSumStates;
/**
* Enables or disables the token tracker
*
* @param enabled if <code>true</code> the tracker is enabled
*/
void setEnabled(boolean enabled) {
this.enabled = enabled;
}
/** Starts the per-utterance tracking */
void startUtterance() {
if (enabled) {
frame = 0;
utteranceStateCount = 0;
utteranceMaxStates = -Integer.MAX_VALUE;
utteranceSumStates = 0;
}
}
/** stops the per-utterance tracking */
void stopUtterance() {
if (enabled) {
dumpSummary();
}
}
/** Starts the per-frame tracking */
void startFrame() {
if (enabled) {
stateMap = new HashMap<Object, TokenStats>();
}
}
/**
* Adds a new token to the tracker
*
* @param t the token to add.
*/
public void add(Token t) {
if (enabled) {
TokenStats stats = getStats(t);
stats.update(t);
}
}
/** Stops the per-frame tracking */
void stopFrame() {
if (enabled) {
frame++;
dumpDetails();
}
}
/** Dumps summary info about the tokens */
public void dumpSummary() {
if (enabled) {
float avgStates = 0f;
if (utteranceStateCount > 0) {
avgStates = ((float) utteranceSumStates) / utteranceStateCount;
}
System.out.print("# Utterance stats ");
System.out.print(" States: " + utteranceStateCount / frame);
if (utteranceStateCount > 0) {
System.out.print(" Paths: " + utteranceSumStates / frame);
System.out.print(" Max: " + utteranceMaxStates);
System.out.print(" Avg: " + avgStates);
}
System.out.println();
}
}
/** Dumps detailed info about the tokens */
public void dumpDetails() {
if (enabled) {
int maxStates = -Integer.MAX_VALUE;
int hmmCount = 0;
int sumStates = 0;
for (TokenStats stats : stateMap.values()) {
if (stats.isHMM) {
hmmCount++;
}
sumStates += stats.count;
utteranceSumStates += stats.count;
if (stats.count > maxStates) {
maxStates = stats.count;
}
if (stats.count > utteranceMaxStates) {
utteranceMaxStates = stats.count;
}
}
utteranceStateCount += stateMap.size();
float avgStates = 0f;
if (!stateMap.isEmpty()) {
avgStates = ((float) sumStates) / stateMap.size();
}
System.out.print("# Frame " + frame);
System.out.print(" States: " + stateMap.size());
if (!stateMap.isEmpty()) {
System.out.print(" Paths: " + sumStates);
System.out.print(" Max: " + maxStates);
System.out.print(" Avg: " + avgStates);
System.out.print(" HMM: " + hmmCount);
}
System.out.println();
}
}
/**
* Gets the statistics for a particular token
*
* @param t the token of interest
* @return the token statistics associated with the given token
*/
private TokenStats getStats(Token t) {
TokenStats stats = stateMap.get(t.getSearchState()
.getLexState());
if (stats == null) {
stats = new TokenStats();
stateMap.put(t.getSearchState().getLexState(), stats);
}
return stats;
}
/**
* A class for keeping track of statistics about tokens. Tracks the count,
* minimum and maximum score for a particular state.
*/
class TokenStats {
int count;
float maxScore;
float minScore;
boolean isHMM;
TokenStats() {
count = 0;
maxScore = -Float.MAX_VALUE;
minScore = Float.MAX_VALUE; // start high so the first update always lowers it
}
/**
* Updates these statistics with the given token
*
* @param t the token
*/
public void update(Token t) {
count++;
if (t.getScore() > maxScore) {
maxScore = t.getScore();
}
if (t.getScore() < minScore) {
minScore = t.getScore();
}
isHMM = t.getSearchState() instanceof HMMSearchState;
}
}
}

View file

@ -1,80 +0,0 @@
package edu.cmu.sphinx.decoder.search.stats;
import edu.cmu.sphinx.decoder.search.Token;
import edu.cmu.sphinx.linguist.HMMSearchState;
import edu.cmu.sphinx.linguist.SearchState;
import edu.cmu.sphinx.linguist.UnitSearchState;
import edu.cmu.sphinx.linguist.WordSearchState;
import edu.cmu.sphinx.linguist.acoustic.HMM;
/**
* A tool for tracking the types of tokens created and placed in the beam
* <p>
* TODO: Develop a mechanism for adding trackers such as these in a more general fashion.
*/
public class TokenTypeTracker {
// keep track of the various types of states
private int numWords;
private int numUnits;
private int numOthers;
private int numHMMBegin;
private int numHMMEnd;
private int numHMMSingle;
private int numHMMInternal;
private int numTokens;
/**
* Adds a token to this tracker. Records statistics about the type of token.
*
* @param t the token to track
*/
public void add(Token t) {
numTokens++;
SearchState s = t.getSearchState();
if (s instanceof WordSearchState) {
numWords++;
} else if (s instanceof UnitSearchState) {
numUnits++;
} else if (s instanceof HMMSearchState) {
HMM hmm = ((HMMSearchState) s).getHMMState().getHMM();
switch (hmm.getPosition()) {
case BEGIN: numHMMBegin++; break;
case END: numHMMEnd++; break;
case SINGLE: numHMMSingle++; break;
case INTERNAL: numHMMInternal++; break;
default: break;
}
} else {
numOthers++;
}
}
/** Shows the accumulated statistics */
public void dump() {
System.out.println("TotalTokens: " + numTokens);
System.out.println(" Words: " + numWords + pc(numWords));
System.out.println(" Units: " + numUnits + pc(numUnits));
System.out.println(" HMM-b: " + numHMMBegin + pc(numHMMBegin));
System.out.println(" HMM-e: " + numHMMEnd + pc(numHMMEnd));
System.out.println(" HMM-s: " + numHMMSingle + pc(numHMMSingle));
System.out.println(" HMM-i: " + numHMMInternal +
pc(numHMMInternal));
System.out.println(" Others: " + numOthers + pc(numOthers));
}
/**
* Utility method for generating integer percents
*
* @param num the value to be converted into percent
* @return a string representation as a percent
*/
private String pc(int num) {
int percent = ((100 * num) / numTokens);
return " (" + percent + "%)";
}
}
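These stats classes are meant to be wired into the decoder's grow step for debugging. A hypothetical harness (the per-frame call site shown here is illustrative, not the actual Sphinx-4 wiring):

```java
import edu.cmu.sphinx.decoder.search.Token;
import edu.cmu.sphinx.decoder.search.stats.TokenTypeTracker;

/** Hypothetical debugging harness around TokenTypeTracker. */
final class TrackerDemo {
    /** Call once per frame with the current active list's tokens. */
    static void trackFrame(Iterable<Token> activeTokens) {
        TokenTypeTracker typeTracker = new TokenTypeTracker();
        for (Token t : activeTokens) {
            typeTracker.add(t); // classifies word / unit / HMM tokens
        }
        typeTracker.dump();     // prints counts and percentages per token type
    }
}
```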

Some files were not shown because too many files have changed in this diff.