Targeted Patent:

Patent: US11416668B2
Filed: 2009-10-14
Issued: 2022-08-16
Patent Holder: (Original Assignee) Iplcontent LLC     (Current Assignee) Iplcontent LLC
Inventor(s): Chi Fai Ho, Peter P. Tong

Title: Method and apparatus applicable for voice recognition with limited dictionary

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US20090254529A1
Filed: 2008-04-04
Issued: 2009-10-08
Patent Holder: (Original Assignee) Lev Goldentouch     
Inventor(s): Lev Goldentouch

Title: Systems, methods and computer program products for content management

[FEATURE ID: 1] materials | material, it, therein, thereof, information | [FEATURE ID: 1] associated information metadata
[TRANSITIVE ID: 2] comprising, including, using | of, having, being, includes, containing, with, for | [TRANSITIVE ID: 2] comprising
[FEATURE ID: 3] wireless communication component, storage medium, microphone, sensor | memory, receiver, controller, database, processor, device, transducer | [FEATURE ID: 3] web user
[FEATURE ID: 4] first piece, text sub file | portion, first, spreadsheet, page, summary, message, form | [FEATURE ID: 4] second internet document
[FEATURE ID: 5] logic sub file | metadata, content, document, portion, tag | [FEATURE ID: 5] other web user
[FEATURE ID: 6] first limited voice recognition dictionary, second limited voice recognition dictionary, limited image dictionary | query, document, request, database, message, file, template | [FEATURE ID: 6] source internet document
[TRANSITIVE ID: 7] received, based | provided, generated, obtained, retrieved, captured, acquired, accessed | [TRANSITIVE ID: 7] selected
[TRANSITIVE ID: 8] analyze | determine, receive, identify, generate, obtain | [TRANSITIVE ID: 8] provide
[FEATURE ID: 9] data, first voice inputs, second voice inputs | signals, communications, commands, words, audio, text, instructions | [FEATURE ID: 9] information
[FEATURE ID: 10] query | user, request, response, command, querying, call, manipulation | [FEATURE ID: 10] graphic display, query, selection
[FEATURE ID: 11] area, image | identifier, address, object, input, event, environment, application | [FEATURE ID: 11] internet connection
[FEATURE ID: 12] response | display, document, content, view, page, selection, representation | [FEATURE ID: 12] user selection, portion, browser
[FEATURE ID: 13] voice inputs | input, the, receipt, receiving, recognition, data, modification | [FEATURE ID: 13] metadata, spatial parameter responsive
[FEATURE ID: 14] claim | the claim, statement, aspect, example, claimed, embodiment, clam | [FEATURE ID: 14] claim
1 . An apparatus applicable for a user to consume materials [FEATURE ID: 1]

, the apparatus comprising [TRANSITIVE ID: 2]

: a controller ; a wireless communication component [FEATURE ID: 3]

; and a storage medium [FEATURE ID: 3]

to store at least a first piece [FEATURE ID: 4]

of materials , with the first piece of materials comprising a text sub file [FEATURE ID: 4]

including [TRANSITIVE ID: 2]

a first piece of text , an audio sub file including a first piece of audio , and a logic sub file [FEATURE ID: 5]

including a first piece of instructions executable by at least the controller , with the first piece of materials and a first limited voice recognition dictionary [FEATURE ID: 6]

configured to be wirelessly received [TRANSITIVE ID: 7]

by the apparatus via at least the wireless communication component , with the first limited voice recognition dictionary at least tailored for the first piece of audio , wherein at least using [TRANSITIVE ID: 2]

instructions in the first piece of instructions , the controller is configured to : analyze [TRANSITIVE ID: 8]

, using at least the first limited voice recognition dictionary , data [FEATURE ID: 9]

based [TRANSITIVE ID: 7]

on first voice inputs [FEATURE ID: 9]

received via at least a microphone [FEATURE ID: 3]

that is configured to be coupled at least to the controller , to at least identify a query [FEATURE ID: 10]

; and identify an area [FEATURE ID: 11]

in the first piece of materials other than in the logic sub file , based on at least the query , to generate a response [FEATURE ID: 12]

to present to the user , with a second piece of materials including a second piece of audio and a second piece of instructions executable by at least the controller , with the second piece of materials and a second limited voice recognition dictionary [FEATURE ID: 6]

configured to be wirelessly received by the apparatus via at least the wireless communication component , with the second limited voice recognition dictionary and the first limited voice recognition dictionary both in the apparatus , but the second limited voice recognition dictionary being different from the first limited voice recognition dictionary , with the second limited voice recognition dictionary at least tailored for the second piece of audio , and with at least the controller configured to using at least instructions in the second piece of instructions and the second limited voice recognition dictionary to identify , based on voice inputs [FEATURE ID: 13]

via at least the microphone , materials in the second piece of materials . 2 . An apparatus as recited in claim [FEATURE ID: 14]

1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable determining at least an interest of the user in the first piece of materials and identify materials in the first piece of materials at least based on the interest , to present to the user . 3 . An apparatus as recited in claim 1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable searching for materials in the first piece of materials to present to the user . 4 . An apparatus as recited in claim 1 , wherein the area in the first piece of materials includes an image [FEATURE ID: 11]

, and wherein to identify the area to generate the response depends on a limited image dictionary [FEATURE ID: 6]

configured to be wirelessly received by the apparatus via at least the wireless communication component . 5 . An apparatus as recited in claim 1 , wherein the apparatus is configured to be implemented as a headset . 6 . An apparatus as recited in claim 1 , wherein the apparatus is configured to be implemented in a car . 7 . An apparatus as recited in claim 1 , wherein the apparatus comprises a sensor [FEATURE ID: 3]

at least for images , wherein the controller is configured to recognize at least an image sensed by the sensor , and wherein the image includes an image of the user or of an environment around the apparatus . 8 . An apparatus as recited in claim 7 , wherein at least using instructions in the first piece of instructions , the controller is configured to recognize at least the image using at least a limited image dictionary configured to be wirelessly received by the apparatus via at least the wireless communication component , wherein the limited image dictionary is at least tailored for images in the first piece of materials , and wherein at least another image is not able to be recognized by the apparatus using the limited image dictionary , but able to be recognized using another limited image dictionary . 9 . An apparatus as recited in claim 8 , wherein at least using instructions in the first piece of instructions , the controller is configured to : analyze data based on second voice inputs [FEATURE ID: 9]
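Claim 1 above ties each wirelessly delivered piece of materials to its own limited voice recognition dictionary, with recognition constrained to whichever dictionary is active. The following is a minimal sketch of that dictionary-swapping behavior; every name and data value is purely illustrative, since the claim prescribes no particular implementation.

```python
# Hypothetical sketch of the claimed "limited dictionary" idea: recognition is
# constrained to a small vocabulary tailored to one piece of audio, and a
# different piece of materials ships with a different limited dictionary.
# All identifiers here are illustrative; the claim does not name an API.

def match_query(voice_tokens, limited_dictionary):
    """Keep only the tokens the active limited dictionary can recognize."""
    recognized = [t for t in voice_tokens if t in limited_dictionary]
    return " ".join(recognized)

# Two pieces of materials, each delivered with its own tailored dictionary.
dictionary_piece1 = {"chapter", "next", "repeat", "picture"}
dictionary_piece2 = {"scene", "pause", "louder"}

# The same utterance resolves to different queries depending on which
# dictionary is active, mirroring the first/second dictionary distinction.
utterance = ["next", "picture", "pause"]
q1 = match_query(utterance, dictionary_piece1)  # "next picture"
q2 = match_query(utterance, dictionary_piece2)  # "pause"
```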

1 . A method for managing content , the method comprising [TRANSITIVE ID: 2]

: acquiring a source internet document [FEATURE ID: 6]

over an internet connection [FEATURE ID: 11]

; dividing the source internet document into multiple granular elements , to provide [TRANSITIVE ID: 8]

a group of interrelated granular elements ; adding metadata fields to the granular elements ; receiving granular element associated information [FEATURE ID: 9]

from a web user [FEATURE ID: 3]

, wherein the granular element associated information is associated with a referenced granular element that is selected [TRANSITIVE ID: 7]

in response to user selection [FEATURE ID: 12]

; generating a second internet document [FEATURE ID: 4]

that comprises the referenced granular element , other granular elements , and the granular element associated information , wherein the generating is responsive to metadata [FEATURE ID: 13]

of at least one granular element ; and providing the second internet document over an internet connection . 2 . The method according to claim [FEATURE ID: 14]

1 , wherein a spatial relationship is defined for each of a group of granular elements in respect to at least one other granular element . 3 . The method according to claim 1 , wherein the providing of the second internet document comprises providing the second internet document to another web user . 4 . The method according to claim 1 , further comprising providing a user interface for marking a portion [FEATURE ID: 12]

of the source internet document , wherein the referenced granular element is at least partly referenced by a marking of the user . 5 . The method according to claim 4 , wherein the dividing is followed by generating at least one granular element in response to the marking of the user . 6 . The method according to claim 4 , further comprising recording a marking of the user using spatial parameter responsive [FEATURE ID: 13]

to a graphic display [FEATURE ID: 10]

of a browser [FEATURE ID: 12]

- within - browser module . 7 . The method according to claim 1 , wherein the providing of the second internet document is responsive to a received query [FEATURE ID: 10]

for the source internet document , and comprises providing the second internet document instead of the source internet document . 8 . The method according to claim 1 , wherein the receiving of the granular element associated information is followed by storing associated information metadata [FEATURE ID: 1]

that comprises metadata pertaining to the web user , wherein the second internet document comprises at least part of the associated information metadata . 9 . The method according to claim 1 , further comprising : receiving additional granular element associated information from another web user , wherein the additional granular element associated information is associated with a referenced granular element which is selected in response to a selection [FEATURE ID: 10]

of the other web user [FEATURE ID: 5]
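The charted method divides a source document into granular elements, attaches metadata and user-associated information to a referenced element, and generates a second document that carries that information. The following is a rough sketch under assumed element and field names, none of which come from the patent.

```python
# Illustrative sketch of the charted content-management method: divide a
# source document into granular elements, attach metadata, accept associated
# information for one referenced element from a web user, and generate a
# second document that carries it. Field names are assumptions.

def divide(source_text):
    """Split a source document into granular elements with empty metadata."""
    return [{"id": i, "text": p, "metadata": {}}
            for i, p in enumerate(source_text.split("\n\n"))]

def annotate(elements, referenced_id, associated_info, web_user):
    """Attach user-supplied associated information to the referenced element."""
    for el in elements:
        if el["id"] == referenced_id:
            el["metadata"]["associated_info"] = associated_info
            el["metadata"]["web_user"] = web_user
    return elements

def generate_second_document(elements):
    """Emit a second document containing the elements plus any annotations."""
    parts = []
    for el in elements:
        parts.append(el["text"])
        if "associated_info" in el["metadata"]:
            parts.append(f"[{el['metadata']['web_user']}: "
                         f"{el['metadata']['associated_info']}]")
    return "\n\n".join(parts)

elements = divide("First paragraph.\n\nSecond paragraph.")
elements = annotate(elements, 1, "needs a citation", "lev")
second_doc = generate_second_document(elements)
```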








Targeted Patent:

Patent: US11416668B2
Filed: 2009-10-14
Issued: 2022-08-16
Patent Holder: (Original Assignee) Iplcontent LLC     (Current Assignee) Iplcontent LLC
Inventor(s): Chi Fai Ho, Peter P. Tong

Title: Method and apparatus applicable for voice recognition with limited dictionary

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US7584103B2
Filed: 2004-08-20
Issued: 2009-09-01
Patent Holder: (Original Assignee) Multimodal Tech Inc     (Current Assignee) MULTIMODAL TECHNOLOGIES LLC
Inventor(s): Juergen Fritsch, Michael Finke, Detlef Koll, Monika Woszczyna, Girija Yegnanarayanan

Title: Automated extraction of semantic content and generation of a structured document from speech

[FEATURE ID: 1] apparatus applicable, apparatus, area | information, device, system, equipment, method, application, electronic apparatus | [FEATURE ID: 1] apparatus
[TRANSITIVE ID: 2] consume, present | provide, generate, obtain, retrieve, prepare, render, communicate | [TRANSITIVE ID: 2] produce
[TRANSITIVE ID: 3] comprising | including, comprises, having, involving, featuring, incorporating, with | [TRANSITIVE ID: 3] comprising
[FEATURE ID: 4] controller, microphone | cpu, device, server, microprocessor, network, circuit, memory | [FEATURE ID: 4] computer, speech recognition decoder
[FEATURE ID: 5] storage medium, logic sub file, second limited voice recognition dictionary, limited image dictionary | database, library, document, memory, grammar, code, file | [FEATURE ID: 5] probabilistic language model
[FEATURE ID: 6] text sub file, audio, response | message, content, signal, language, text, page, sentence | [FEATURE ID: 6] audio stream, document, second hierarchy, rendition
[TRANSITIVE ID: 7] including | using, representing, as, to, receiving, and, storing | [TRANSITIVE ID: 7] identifying, including
[FEATURE ID: 8] text, first voice inputs | speech, words, logic, audio, sentences, information, data | [FEATURE ID: 8] content
[FEATURE ID: 9] first limited voice recognition dictionary | dictionary, database, vocabulary, document, lexicon, grammar, first | [FEATURE ID: 9] first hierarchy
[TRANSITIVE ID: 10] configured, tailored | used, implemented, coupled, compatible, constructed, adapted, sized | [TRANSITIVE ID: 10] associated
[TRANSITIVE ID: 11] using | processing, use, employing, applying, utilizing, implementing | [TRANSITIVE ID: 11] using
[TRANSITIVE ID: 12] analyze | process, evaluate, compare, identify, decode | [TRANSITIVE ID: 12] apply
[FEATURE ID: 13] data | information, text, words, parameters, content | [FEATURE ID: 13] language models
[FEATURE ID: 14] query | text, message, term, subject, document, location, predicate | [FEATURE ID: 14] semantic concept
[FEATURE ID: 15] claim | any, paragraph, claim of, preceding claim, the claim, item, clause | [FEATURE ID: 15] claim
1 . An apparatus applicable [FEATURE ID: 1]

for a user to consume [TRANSITIVE ID: 2]

materials , the apparatus [FEATURE ID: 1]

comprising [TRANSITIVE ID: 3]

: a controller [FEATURE ID: 4]

; a wireless communication component ; and a storage medium [FEATURE ID: 5]

to store at least a first piece of materials , with the first piece of materials comprising a text sub file [FEATURE ID: 6]

including [TRANSITIVE ID: 7]

a first piece of text [FEATURE ID: 8]

, an audio sub file including a first piece of audio [FEATURE ID: 6]

, and a logic sub file [FEATURE ID: 5]

including a first piece of instructions executable by at least the controller , with the first piece of materials and a first limited voice recognition dictionary [FEATURE ID: 9]

configured [TRANSITIVE ID: 10]

to be wirelessly received by the apparatus via at least the wireless communication component , with the first limited voice recognition dictionary at least tailored [TRANSITIVE ID: 10]

for the first piece of audio , wherein at least using [TRANSITIVE ID: 11]

instructions in the first piece of instructions , the controller is configured to : analyze [TRANSITIVE ID: 12]

, using at least the first limited voice recognition dictionary , data [FEATURE ID: 13]

based on first voice inputs [FEATURE ID: 8]

received via at least a microphone [FEATURE ID: 4]

that is configured to be coupled at least to the controller , to at least identify a query [FEATURE ID: 14]

; and identify an area [FEATURE ID: 1]

in the first piece of materials other than in the logic sub file , based on at least the query , to generate a response [FEATURE ID: 6]

to present [FEATURE ID: 2]

to the user , with a second piece of materials including a second piece of audio and a second piece of instructions executable by at least the controller , with the second piece of materials and a second limited voice recognition dictionary [FEATURE ID: 5]

configured to be wirelessly received by the apparatus via at least the wireless communication component , with the second limited voice recognition dictionary and the first limited voice recognition dictionary both in the apparatus , but the second limited voice recognition dictionary being different from the first limited voice recognition dictionary , with the second limited voice recognition dictionary at least tailored for the second piece of audio , and with at least the controller configured to using at least instructions in the second piece of instructions and the second limited voice recognition dictionary to identify , based on voice inputs via at least the microphone , materials in the second piece of materials . 2 . An apparatus as recited in claim [FEATURE ID: 15]

1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable determining at least an interest of the user in the first piece of materials and identify materials in the first piece of materials at least based on the interest , to present to the user . 3 . An apparatus as recited in claim 1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable searching for materials in the first piece of materials to present to the user . 4 . An apparatus as recited in claim 1 , wherein the area in the first piece of materials includes an image , and wherein to identify the area to generate the response depends on a limited image dictionary [FEATURE ID: 5]

1 . A computer [FEATURE ID: 4]

- implemented method comprising [TRANSITIVE ID: 3]

steps of : ( A ) identifying [TRANSITIVE ID: 7]

a probabilistic language model [FEATURE ID: 5]

including [TRANSITIVE ID: 7]

a plurality of probabilistic language models associated [TRANSITIVE ID: 10]

with a plurality of concepts logically organized in a first hierarchy [FEATURE ID: 9]

; ( B ) using [TRANSITIVE ID: 11]

a speech recognition decoder [FEATURE ID: 4]

to apply [TRANSITIVE ID: 12]

the probabilistic language model to a spoken audio stream [FEATURE ID: 6]

to produce [TRANSITIVE ID: 2]

a document [FEATURE ID: 6]

including content [FEATURE ID: 8]

organized into a plurality of sub-structures logically organized in a second hierarchy [FEATURE ID: 6]

having a logical structure defined by a path through the first hierarchy , comprising : ( B ) ( 1 ) identifying a path through the first hierarchy , comprising : ( B ) ( 1 ) ( a ) identifying a plurality of paths through the first hierarchy ; ( B ) ( 1 ) ( b ) for each of the plurality of paths P , producing a candidate structured document for the spoken audio stream by using the speech recognition decoder to recognize the spoken audio stream using the language models [FEATURE ID: 13]

on path P ; ( B ) ( 1 ) ( c ) applying a metric to the plurality of candidate structured documents produced in step ( B ) ( 1 ) ( b ) to produce a plurality of fitness scores for the plurality of candidate structured documents ; and ( B ) ( 1 ) ( d ) selecting the path which produces the candidate structured document having the highest fitness score ; ( B ) ( 2 ) generating a document having a structure corresponding to the path identified in step ( B ) ( 1 ) . 2 . The method of claim [FEATURE ID: 15]

1 , wherein the step ( B ) ( 2 ) comprises a step of traversing the path through the first hierarchy to generate the document . 3 . The method of claim 1 , wherein the step ( B ) ( 1 ) comprises a step of identifying a path through the first hierarchy which , when applied by a speech recognition decoder to recognize the spoken audio stream , produces an optimal recognition result with respect to the first hierarchy of the plurality of probabilistic language models . 4 . The method of claim 1 , wherein the plurality of sub-structures includes a sub-structure representing a semantic concept [FEATURE ID: 14]

. 5 . The method of claim 4 , wherein the semantic concept comprises a date . 6 . The method of claim 4 , wherein the semantic concept comprises a medication . 7 . The method of claim 4 , wherein the semantic concept is represented in the document in a computer - readable form . 8 . The method of claim 1 , wherein the plurality of probabilistic language models includes at least one n - gram language model . 9 . The method of claim 1 , further comprising a step of : ( C ) rendering the document to produce a rendition [FEATURE ID: 6]

indicating the structure of the document . 10 . The method of claim 1 , wherein the plurality of probabilistic language models includes at least one finite state language model . 11 . The method of claim 10 , wherein the plurality of probabilistic language models includes at least one n - gram language model . 12 . An apparatus [FEATURE ID: 1]
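Steps (B)(1)(a) through (B)(1)(d) above describe a search over paths through the language-model hierarchy: recognize the spoken audio once per path, score each resulting candidate document with a metric, and keep the path whose candidate scores highest. The following toy illustration uses a stand-in fitness metric and vocabulary sets in place of real probabilistic language models; all names are assumptions.

```python
# Toy sketch of the path-selection loop in the charted claim: for each path
# through a hierarchy of language models, "recognize" the audio with that
# path's models, score the candidate document, and keep the best-scoring
# path. The metric and model shapes are stand-ins, not the patent's.

def recognize(tokens, vocabulary):
    """Stand-in decoder: keep tokens covered by this path's vocabulary."""
    return [t for t in tokens if t in vocabulary]

def fitness(candidate, tokens):
    """Stand-in metric: fraction of the audio the candidate accounts for."""
    return len(candidate) / len(tokens)

def best_path(paths, tokens):
    """Steps (B)(1)(b)-(d): score every path's candidate, select the best."""
    scored = []
    for name, vocab in paths.items():
        candidate = recognize(tokens, vocab)
        scored.append((fitness(candidate, tokens), name, candidate))
    _, name, candidate = max(scored)
    return name, candidate

# Two hypothetical paths through the hierarchy, as vocabulary sets.
paths = {
    "history_section": {"patient", "history", "diabetes"},
    "medication_section": {"patient", "medication", "insulin", "daily"},
}
audio = ["patient", "medication", "insulin", "daily"]
chosen, doc = best_path(paths, audio)  # "medication_section" wins
```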








Targeted Patent:

Patent: US11416668B2
Filed: 2009-10-14
Issued: 2022-08-16
Patent Holder: (Original Assignee) Iplcontent LLC     (Current Assignee) Iplcontent LLC
Inventor(s): Chi Fai Ho, Peter P. Tong

Title: Method and apparatus applicable for voice recognition with limited dictionary

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US7581170B2
Filed: 2001-05-31
Issued: 2009-08-25
Patent Holder: (Original Assignee) Lixto Software GmbH     (Current Assignee) Lixto Software GmbH
Inventor(s): Robert Baumgartner, Sergio Flesca, Georg Gottlob, Marcus Herzog

Title: Visual and interactive wrapper generation, automated information extraction from Web pages, and translation into XML

[FEATURE ID: 1] user, storage medium, sensor | processor, database, device, source, controller, mechanism, memory | [FEATURE ID: 1] user
[FEATURE ID: 2] materials, second voice inputs | data, information, items, instructions, audio, texts, words | [FEATURE ID: 2] elements
[TRANSITIVE ID: 3] comprising, using | with, having, of, including, through, containing, for | [TRANSITIVE ID: 3] comprising
[FEATURE ID: 4] controller, wireless communication component, first limited voice recognition dictionary, microphone, headset | device, server, computer, display, detector, sensor, database | [FEATURE ID: 4] example page
[FEATURE ID: 5] text sub file, logic sub file, query, response | document, database, template, form, message, description, grammar | [FEATURE ID: 5] production document, name, filter
[TRANSITIVE ID: 6] including | displaying, defining, indicating, identifying, providing, representing, describing | [TRANSITIVE ID: 6] selecting, declaring, generating
[FEATURE ID: 7] audio sub file, area, interest, image | event, object, item, attribute, identifier, index, expression | [FEATURE ID: 7] internal condition
[FEATURE ID: 8] audio, first voice inputs | content, text, information, image, speech, voice, sounds | [FEATURE ID: 8] example
[TRANSITIVE ID: 9] configured | disposed, positioned, provided, arranged | [TRANSITIVE ID: 9] occurring
[TRANSITIVE ID: 10] received | generated, obtained, used, provided | [TRANSITIVE ID: 10] conditions
[FEATURE ID: 11] instructions, voice inputs | data, information, items, characteristics, definitions, features, those | [FEATURE ID: 11] extraction patterns, instances, refinement conditions, interactive commands
[FEATURE ID: 12] data | patterns, information, text, content | [FEATURE ID: 12] documents
[TRANSITIVE ID: 13] based | corresponding, associated, content, related | [TRANSITIVE ID: 13] automated
[FEATURE ID: 14] second limited voice recognition dictionary | file, text, data, document | [FEATURE ID: 14] production documents
[FEATURE ID: 15] limited image dictionary | pattern, template, list, filter, model, parameter, request | [FEATURE ID: 15] pattern name, document
1 . An apparatus applicable for a user [FEATURE ID: 1]

to consume materials [FEATURE ID: 2]

, the apparatus comprising [TRANSITIVE ID: 3]

: a controller [FEATURE ID: 4]

; a wireless communication component [FEATURE ID: 4]

; and a storage medium [FEATURE ID: 1]

to store at least a first piece of materials , with the first piece of materials comprising a text sub file [FEATURE ID: 5]

including [TRANSITIVE ID: 6]

a first piece of text , an audio sub file [FEATURE ID: 7]

including a first piece of audio [FEATURE ID: 8]

, and a logic sub file [FEATURE ID: 5]

including a first piece of instructions executable by at least the controller , with the first piece of materials and a first limited voice recognition dictionary [FEATURE ID: 4]

configured [TRANSITIVE ID: 9]

to be wirelessly received [TRANSITIVE ID: 10]

by the apparatus via at least the wireless communication component , with the first limited voice recognition dictionary at least tailored for the first piece of audio , wherein at least using [TRANSITIVE ID: 3]

instructions [FEATURE ID: 11]

in the first piece of instructions , the controller is configured to : analyze , using at least the first limited voice recognition dictionary , data [FEATURE ID: 12]

based [TRANSITIVE ID: 13]

on first voice inputs [FEATURE ID: 8]

received via at least a microphone [FEATURE ID: 4]

that is configured to be coupled at least to the controller , to at least identify a query [FEATURE ID: 5]

; and identify an area [FEATURE ID: 7]

in the first piece of materials other than in the logic sub file , based on at least the query , to generate a response [FEATURE ID: 5]

to present to the user , with a second piece of materials including a second piece of audio and a second piece of instructions executable by at least the controller , with the second piece of materials and a second limited voice recognition dictionary [FEATURE ID: 14]

configured to be wirelessly received by the apparatus via at least the wireless communication component , with the second limited voice recognition dictionary and the first limited voice recognition dictionary both in the apparatus , but the second limited voice recognition dictionary being different from the first limited voice recognition dictionary , with the second limited voice recognition dictionary at least tailored for the second piece of audio , and with at least the controller configured to using at least instructions in the second piece of instructions and the second limited voice recognition dictionary to identify , based on voice inputs [FEATURE ID: 11]

via at least the microphone , materials in the second piece of materials . 2 . An apparatus as recited in claim 1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable determining at least an interest [FEATURE ID: 7]

of the user in the first piece of materials and identify materials in the first piece of materials at least based on the interest , to present to the user . 3 . An apparatus as recited in claim 1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable searching for materials in the first piece of materials to present to the user . 4 . An apparatus as recited in claim 1 , wherein the area in the first piece of materials includes an image [FEATURE ID: 7]

, and wherein to identify the area to generate the response depends on a limited image dictionary [FEATURE ID: 15]

configured to be wirelessly received by the apparatus via at least the wireless communication component . 5 . An apparatus as recited in claim 1 , wherein the apparatus is configured to be implemented as a headset [FEATURE ID: 4]

. 6 . An apparatus as recited in claim 1 , wherein the apparatus is configured to be implemented in a car . 7 . An apparatus as recited in claim 1 , wherein the apparatus comprises a sensor [FEATURE ID: 1]

at least for images , wherein the controller is configured to recognize at least an image sensed by the sensor , and wherein the image includes an image of the user or of an environment around the apparatus . 8 . An apparatus as recited in claim 7 , wherein at least using instructions in the first piece of instructions , the controller is configured to recognize at least the image using at least a limited image dictionary configured to be wirelessly received by the apparatus via at least the wireless communication component , wherein the limited image dictionary is at least tailored for images in the first piece of materials , and wherein at least another image is not able to be recognized by the apparatus using the limited image dictionary , but able to be recognized using another limited image dictionary . 9 . An apparatus as recited in claim 8 , wherein at least using instructions in the first piece of instructions , the controller is configured to : analyze data based on second voice inputs [FEATURE ID: 2]

1 . A method for visual and interactive generation of wrappers for documents [FEATURE ID: 12]

, and for automated [TRANSITIVE ID: 13]

information extraction comprising [TRANSITIVE ID: 3]

: defining extraction patterns [FEATURE ID: 11]

on at least one example page [FEATURE ID: 4]

, by visually and interactively selecting [TRANSITIVE ID: 6]

example [FEATURE ID: 8]

- elements [FEATURE ID: 2]

occurring [TRANSITIVE ID: 9]

on the example - page ; visually and interactively declaring [TRANSITIVE ID: 6]

properties of the extraction patterns ; generating [TRANSITIVE ID: 6]

a wrapper ; applying the wrapper to at least one production document [FEATURE ID: 5]

; and automatically extracting matching instances [FEATURE ID: 11]

of the extraction patterns from the production documents [FEATURE ID: 14]

wherein the processes of generation of a pattern further comprises : a ) receiving from a user [FEATURE ID: 1]

a pattern name [FEATURE ID: 15]

and storing said name [FEATURE ID: 5]

; b ) creating and storing a filter [FEATURE ID: 5]

for the pattern ; c ) visualizing the set of instances of the filter on at least one example document by evaluating the filter over the document [FEATURE ID: 15]

and visualizing all data elements of the document that are matching instances , whereby a user can test the filter ; d ) modifying a previously created filter by adding to it refinement conditions [FEATURE ID: 11]

that the instances of the filter must fulfill , where the refinement conditions are obtained from a user by receiving interactive commands [FEATURE ID: 11]

from the user and where the refinement conditions are combined with those conditions [FEATURE ID: 10]

for the filter that were added earlier ; e ) visualizing simultaneously all instances of all filters of the given pattern on at least one document by evaluating its corresponding pattern description against the document , whereby a user can test the pattern description constructed so far , wherein one of said refinement conditions is an internal condition [FEATURE ID: 7]
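Steps (b) through (e) above describe a filter that is created once and then narrowed by user-supplied refinement conditions, with its instance set re-evaluated against the document after each refinement. The following is a minimal sketch of that loop; the element structure and conditions are illustrative assumptions, not the patent's representation.

```python
# Minimal sketch of the charted filter/refinement loop: a filter is a list of
# conditions, its instances are the document elements satisfying all of them,
# and a refinement condition added by the user shrinks the instance set.
# Element structure and condition forms are illustrative assumptions.

def instances(document_elements, filter_conditions):
    """Evaluate the filter: elements matching every condition are instances."""
    return [el for el in document_elements
            if all(cond(el) for cond in filter_conditions)]

# A hypothetical example page, flattened into tagged text elements.
example_page = [
    {"tag": "td", "text": "$19.99"},
    {"tag": "td", "text": "in stock"},
    {"tag": "span", "text": "$5.00"},
]

# Step (b): create and store an initial filter that matches table cells.
price_filter = [lambda el: el["tag"] == "td"]
before = instances(example_page, price_filter)   # two matching instances

# Step (d): the user interactively adds a refinement condition, combined
# with the earlier conditions: the text must also look like a price.
price_filter.append(lambda el: el["text"].startswith("$"))
after = instances(example_page, price_filter)    # narrowed to one instance
```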








Targeted Patent:

Patent: US11416668B2
Filed: 2009-10-14
Issued: 2022-08-16
Patent Holder: (Original Assignee) Iplcontent LLC     (Current Assignee) Iplcontent LLC
Inventor(s): Chi Fai Ho, Peter P. Tong

Title: Method and apparatus applicable for voice recognition with limited dictionary

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US7574653B2
Filed: 2002-10-11
Issued: 2009-08-11
Patent Holder: (Original Assignee) Microsoft Corp     (Current Assignee) Microsoft Technology Licensing LLC
Inventor(s): Joseph Keith Croney, Greg David Schechter

Title: Adaptive image formatting control

[FEATURE ID: 1] apparatus applicable, area, interest, image, car, environment | interface, application, appearance, event, item, index, device | [FEATURE ID: 1] image, indication, mobile client device, image suitable, image format
[TRANSITIVE ID: 2] comprising, using | with, having, of, including, implementing, following, containing | [TRANSITIVE ID: 2] comprising
[FEATURE ID: 3] controller, wireless communication component, microphone, query, limited image dictionary, headset, sensor | display, user, component, database, device, memory, camera | [FEATURE ID: 3] computer, web application, server device, mobile client device', browser
[FEATURE ID: 4] storage medium | file, media, database, mechanism, system, structure, buffer | [FEATURE ID: 4] method executable, memory
[TRANSITIVE ID: 5] store | contain, provide, include, comprise, have | [TRANSITIVE ID: 5] indicate
[FEATURE ID: 6] text sub file, logic sub file | description, portion, script, code, command, profile, tag | [FEATURE ID: 6] header
[TRANSITIVE ID: 7] including | using, defining, providing, indicating, determining, displaying, identifying | [TRANSITIVE ID: 7] designating, maintaining
[FEATURE ID: 8] audio sub file | animation, image, item, output, object, indication, identifier | [FEATURE ID: 8] aspect ratio, image file, image file suitable
[FEATURE ID: 9] first limited voice recognition dictionary | file, document, content, command, message, display, browser | [FEATURE ID: 9] request, web page
[TRANSITIVE ID: 10] tailored | used, suitable, specific, adapted | [TRANSITIVE ID: 10] applied
[FEATURE ID: 11] instructions | operations, ones, users, commands, images, parameters, data | [FEATURE ID: 11] characteristics, other images
[FEATURE ID: 12] data | text, metadata, parameters, instructions | [FEATURE ID: 12] information
[TRANSITIVE ID: 13] based | associated, received, corresponding, generated | [TRANSITIVE ID: 13] requested
[FEATURE ID: 14] response | content, location, display, configuration, position, representation, characteristics | [FEATURE ID: 14] dimension, display characteristics
[FEATURE ID: 15] second limited voice recognition dictionary | file, document, memory, request | [FEATURE ID: 15] suitable image file
[FEATURE ID: 16] claim | any, embodiment, the claim, item, figure, paragraph, clam | [FEATURE ID: 16] claim
1 . An apparatus applicable [FEATURE ID: 1]

for a user to consume materials , the apparatus comprising [TRANSITIVE ID: 2]

: a controller [FEATURE ID: 3]

; a wireless communication component [FEATURE ID: 3]

; and a storage medium [FEATURE ID: 4]

to store [TRANSITIVE ID: 5]

at least a first piece of materials , with the first piece of materials comprising a text sub file [FEATURE ID: 6]

including [TRANSITIVE ID: 7]

a first piece of text , an audio sub file [FEATURE ID: 8]

including a first piece of audio , and a logic sub file [FEATURE ID: 6]

including a first piece of instructions executable by at least the controller , with the first piece of materials and a first limited voice recognition dictionary [FEATURE ID: 9]

configured to be wirelessly received by the apparatus via at least the wireless communication component , with the first limited voice recognition dictionary at least tailored [TRANSITIVE ID: 10]

for the first piece of audio , wherein at least using [TRANSITIVE ID: 2]

instructions [FEATURE ID: 11]

in the first piece of instructions , the controller is configured to : analyze , using at least the first limited voice recognition dictionary , data [FEATURE ID: 12]

based [TRANSITIVE ID: 13]

on first voice inputs received via at least a microphone [FEATURE ID: 3]

that is configured to be coupled at least to the controller , to at least identify a query [FEATURE ID: 3]

; and identify an area [FEATURE ID: 1]

in the first piece of materials other than in the logic sub file , based on at least the query , to generate a response [FEATURE ID: 14]

to present to the user , with a second piece of materials including a second piece of audio and a second piece of instructions executable by at least the controller , with the second piece of materials and a second limited voice recognition dictionary [FEATURE ID: 15]

configured to be wirelessly received by the apparatus via at least the wireless communication component , with the second limited voice recognition dictionary and the first limited voice recognition dictionary both in the apparatus , but the second limited voice recognition dictionary being different from the first limited voice recognition dictionary , with the second limited voice recognition dictionary at least tailored for the second piece of audio , and with at least the controller configured to using at least instructions in the second piece of instructions and the second limited voice recognition dictionary to identify , based on voice inputs via at least the microphone , materials in the second piece of materials . 2 . An apparatus as recited in claim [FEATURE ID: 16]

1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable determining at least an interest [FEATURE ID: 1]

of the user in the first piece of materials and identify materials in the first piece of materials at least based on the interest , to present to the user . 3 . An apparatus as recited in claim 1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable searching for materials in the first piece of materials to present to the user . 4 . An apparatus as recited in claim 1 , wherein the area in the first piece of materials includes an image [FEATURE ID: 1]

, and wherein to identify the area to generate the response depends on a limited image dictionary [FEATURE ID: 3]

configured to be wirelessly received by the apparatus via at least the wireless communication component . 5 . An apparatus as recited in claim 1 , wherein the apparatus is configured to be implemented as a headset [FEATURE ID: 3]

. 6 . An apparatus as recited in claim 1 , wherein the apparatus is configured to be implemented in a car [FEATURE ID: 1]

. 7 . An apparatus as recited in claim 1 , wherein the apparatus comprises a sensor [FEATURE ID: 3]

at least for images , wherein the controller is configured to recognize at least an image sensed by the sensor , and wherein the image includes an image of the user or of an environment [FEATURE ID: 1]
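The targeted claim ties each piece of materials to its own small recognition vocabulary: speech input is matched only against the limited dictionary tailored for that content, so a phrase outside the dictionary is simply not identified. A minimal sketch of that idea (function names, phrases, and the similarity threshold are invented for illustration, not taken from the patent):

```python
# Illustrative sketch of "limited dictionary" matching: recognition is
# constrained to the small vocabulary shipped with one piece of materials.
from difflib import SequenceMatcher

def match_query(voice_text, limited_dictionary, threshold=0.8):
    """Return the best-matching dictionary phrase for an already
    transcribed input, or None if nothing in the limited dictionary
    is close enough."""
    best_phrase, best_score = None, 0.0
    for phrase in limited_dictionary:
        score = SequenceMatcher(None, voice_text.lower(), phrase.lower()).ratio()
        if score > best_score:
            best_phrase, best_score = phrase, score
    return best_phrase if best_score >= threshold else None

# Two different dictionaries, each tailored for its own piece of audio:
chapter1_dict = ["play chapter one", "repeat that sentence", "define the word"]
chapter2_dict = ["show the map", "zoom in", "next illustration"]

print(match_query("play chapter one", chapter1_dict))  # matched in dictionary 1
print(match_query("show the map", chapter1_dict))      # outside dictionary 1
print(match_query("show the map", chapter2_dict))      # matched in dictionary 2
```

The sketch mirrors the claim's point that the same input can be recognizable with one dictionary but not another, because each dictionary is deliberately small.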

1 . A method executable [FEATURE ID: 4]

on a computer [FEATURE ID: 3]

for generating an image [FEATURE ID: 1]

, comprising [TRANSITIVE ID: 2]

: a web application [FEATURE ID: 3]

designating [TRANSITIVE ID: 7]

conversion characteristics at a server device [FEATURE ID: 3]

, the conversion characteristics being associated with characteristics [FEATURE ID: 11]

of a plurality of different mobile client devices , wherein the conversion characteristics indicate [TRANSITIVE ID: 5]

at least one of a scale factor for an image in relation to a dimension [FEATURE ID: 14]

of a mobile client device ' [FEATURE ID: 3]

s display , an indication [FEATURE ID: 1]

for maintaining [TRANSITIVE ID: 7]

an aspect ratio [FEATURE ID: 8]

of the image , and a dither method applied [TRANSITIVE ID: 10]

to the image ; the web application designating a priority factor for the image , wherein the priority factor indicates whether the requested [TRANSITIVE ID: 13]

image will be displayed on the mobile client device [FEATURE ID: 1]

before each of a plurality of other images [FEATURE ID: 11]

; receiving a request [FEATURE ID: 9]

for a web page [FEATURE ID: 9]

at the server device from a browser [FEATURE ID: 3]

operating on the mobile client device , wherein the web page is created by the web application and the web page includes the image ; the server device determining the display characteristics [FEATURE ID: 14]

of the mobile client device from the request , wherein the request includes a header [FEATURE ID: 6]

with information [FEATURE ID: 12]

indicating display characteristics of the mobile client device ; and the server device modifying an image file [FEATURE ID: 8]

corresponding to the image for generating an image suitable [FEATURE ID: 1]

for rendering on the display of the mobile client device in accordance with the display characteristics of the mobile client device and the conversion characteristics . 2 . The method according to claim [FEATURE ID: 16]

1 further including storing the image suitable for rendering on the display of the mobile client device in memory [FEATURE ID: 4]

. 3 . The method according to claim 2 further including : determining if an image file suitable [FEATURE ID: 8]

for display on a mobile client device has been generated ; and if a suitable image file [FEATURE ID: 15]

has been generated , retrieving from memory the generated image file for transmission to the mobile client device . 4 . The method according to claim 1 wherein determining the display characteristics of the mobile client device includes receiving information identifying the mobile client device . 5 . The method according to claim 1 wherein modifying the image file includes converting the image file into a format compatible with the image format [FEATURE ID: 1]
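The reference claim has the server determine a device's display characteristics from a request header and scale the image accordingly, with a conversion characteristic indicating whether aspect ratio is maintained. A minimal sketch of that scaling step (the header format and function names are invented for illustration, not the patent's):

```python
# Illustrative sketch: derive target image dimensions from display
# characteristics carried in a (hypothetical) request header.

def parse_display_header(header_value):
    """Parse a hypothetical '320x240' display-characteristics header."""
    width, height = header_value.lower().split("x")
    return int(width), int(height)

def fit_image(img_w, img_h, disp_w, disp_h, keep_aspect=True):
    """Compute output dimensions for the target display. With keep_aspect,
    a single scale factor is applied to both axes so the image is not
    distorted; otherwise the display dimensions are used directly."""
    if keep_aspect:
        scale = min(disp_w / img_w, disp_h / img_h)
        return round(img_w * scale), round(img_h * scale)
    return disp_w, disp_h

disp = parse_display_header("320x240")
print(fit_image(640, 480, *disp))  # -> (320, 240), exact fit
print(fit_image(800, 480, *disp))  # -> (320, 192), aspect ratio preserved
```

A real implementation would also apply the claimed dither method and cache the converted file for reuse, as in dependent claims 2 and 3.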








Targeted Patent:

Patent: US11416668B2
Filed: 2009-10-14
Issued: 2022-08-16
Patent Holder: (Original Assignee) Iplcontent LLC     (Current Assignee) Iplcontent LLC
Inventor(s): Chi Fai Ho, Peter P. Tong

Title: Method and apparatus applicable for voice recognition with limited dictionary

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US20090094537A1
Filed: 2007-10-05
Issued: 2009-04-09
Patent Holder: (Original Assignee) Travis Alber     
Inventor(s): Travis Alber

Title: Method for allowing users of a document to pass messages to each other in a context-specific manner

[FEATURE ID: 1] apparatus applicable | apparatus, interface, system, application, information | [FEATURE ID: 1] software
[FEATURE ID: 2] user | single user, subscriber, users, customer, human, device | [FEATURE ID: 2] user
[TRANSITIVE ID: 3] consume, store, analyze, present | access, communicate, read, receive, provide, generate, exchange | [TRANSITIVE ID: 3] converse, comment
[TRANSITIVE ID: 4] comprising, using | having, with, of, including, from, comprises, includes | [TRANSITIVE ID: 4] comprising
[FEATURE ID: 5] storage medium, logic sub file, first limited voice recognition dictionary, limited image dictionary | database, document, file, container, grammar, system, template | [FEATURE ID: 5] single document, virtual space
[FEATURE ID: 6] first piece | kind, part, number, set, portion | [FEATURE ID: 6] combination
[FEATURE ID: 7] text sub file, second limited voice recognition dictionary | text, message, script, data, sentence, grammar, database | [FEATURE ID: 7] document
[TRANSITIVE ID: 8] including | displaying, using, defining, storing | [TRANSITIVE ID: 8] providing
[FEATURE ID: 9] text | video, document, photos, spreadsheet, multimedia, pictorial, image data | [FEATURE ID: 9] audio, binary file
[FEATURE ID: 10] audio | image, digital, text, audiovisual, multimedia | [FEATURE ID: 10] video
[TRANSITIVE ID: 11] configured, tailored | implemented, used, provided, designed, enabled, modified, stored | [TRANSITIVE ID: 11] defined, made
[TRANSITIVE ID: 12] received | detected, stored, captured, defined, recorded, determined, displayed | [TRANSITIVE ID: 12] tracked, transmitted
[FEATURE ID: 13] instructions, voice inputs | data, the, information, communication, actions | [FEATURE ID: 13] conversation
[FEATURE ID: 14] first voice inputs | content, messages, comments, information, voice, data, text | [FEATURE ID: 14] positions, proximity filter settings
[FEATURE ID: 15] query | text, task, location, document, user | [FEATURE ID: 15] context
[FEATURE ID: 16] interest, environment | action, activity, orientation, appearance, distance, identity, size | [FEATURE ID: 16] location, range, proximity preference
[FEATURE ID: 17] images | detection, imagery, the, monitoring, capturing, data, pictures | [FEATURE ID: 17] text, filtering
[FEATURE ID: 18] second voice inputs | gestures, messages, data, comments | [FEATURE ID: 18] relative proximities
1 . An apparatus applicable [FEATURE ID: 1]

for a user [FEATURE ID: 2]

to consume [TRANSITIVE ID: 3]

materials , the apparatus comprising [TRANSITIVE ID: 4]

: a controller ; a wireless communication component ; and a storage medium [FEATURE ID: 5]

to store [TRANSITIVE ID: 3]

at least a first piece [FEATURE ID: 6]

of materials , with the first piece of materials comprising a text sub file [FEATURE ID: 7]

including [TRANSITIVE ID: 8]

a first piece of text [FEATURE ID: 9]

, an audio sub file including a first piece of audio [FEATURE ID: 10]

, and a logic sub file [FEATURE ID: 5]

including a first piece of instructions executable by at least the controller , with the first piece of materials and a first limited voice recognition dictionary [FEATURE ID: 5]

configured [TRANSITIVE ID: 11]

to be wirelessly received [TRANSITIVE ID: 12]

by the apparatus via at least the wireless communication component , with the first limited voice recognition dictionary at least tailored [TRANSITIVE ID: 11]

for the first piece of audio , wherein at least using [TRANSITIVE ID: 4]

instructions [FEATURE ID: 13]

in the first piece of instructions , the controller is configured to : analyze [TRANSITIVE ID: 3]

, using at least the first limited voice recognition dictionary , data based on first voice inputs [FEATURE ID: 14]

received via at least a microphone that is configured to be coupled at least to the controller , to at least identify a query [FEATURE ID: 15]

; and identify an area in the first piece of materials other than in the logic sub file , based on at least the query , to generate a response to present [FEATURE ID: 3]

to the user , with a second piece of materials including a second piece of audio and a second piece of instructions executable by at least the controller , with the second piece of materials and a second limited voice recognition dictionary [FEATURE ID: 7]

configured to be wirelessly received by the apparatus via at least the wireless communication component , with the second limited voice recognition dictionary and the first limited voice recognition dictionary both in the apparatus , but the second limited voice recognition dictionary being different from the first limited voice recognition dictionary , with the second limited voice recognition dictionary at least tailored for the second piece of audio , and with at least the controller configured to using at least instructions in the second piece of instructions and the second limited voice recognition dictionary to identify , based on voice inputs [FEATURE ID: 13]

via at least the microphone , materials in the second piece of materials . 2 . An apparatus as recited in claim 1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable determining at least an interest [FEATURE ID: 16]

of the user in the first piece of materials and identify materials in the first piece of materials at least based on the interest , to present to the user . 3 . An apparatus as recited in claim 1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable searching for materials in the first piece of materials to present to the user . 4 . An apparatus as recited in claim 1 , wherein the area in the first piece of materials includes an image , and wherein to identify the area to generate the response depends on a limited image dictionary [FEATURE ID: 5]

configured to be wirelessly received by the apparatus via at least the wireless communication component . 5 . An apparatus as recited in claim 1 , wherein the apparatus is configured to be implemented as a headset . 6 . An apparatus as recited in claim 1 , wherein the apparatus is configured to be implemented in a car . 7 . An apparatus as recited in claim 1 , wherein the apparatus comprises a sensor at least for images [FEATURE ID: 17]

, wherein the controller is configured to recognize at least an image sensed by the sensor , and wherein the image includes an image of the user or of an environment [FEATURE ID: 16]

around the apparatus . 8 . An apparatus as recited in claim 7 , wherein at least using instructions in the first piece of instructions , the controller is configured to recognize at least the image using at least a limited image dictionary configured to be wirelessly received by the apparatus via at least the wireless communication component , wherein the limited image dictionary is at least tailored for images in the first piece of materials , and wherein at least another image is not able to be recognized by the apparatus using the limited image dictionary , but able to be recognized using another limited image dictionary . 9 . An apparatus as recited in claim 8 , wherein at least using instructions in the first piece of instructions , the controller is configured to : analyze data based on second voice inputs [FEATURE ID: 18]
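Claim 8 above extends the limited-dictionary idea to images: an image outside one limited image dictionary is not recognizable with it, yet is recognizable with another dictionary. A minimal sketch treating each dictionary as a small set of labeled reference feature vectors with a distance threshold (all names, vectors, and the threshold are invented for illustration, not taken from the patent):

```python
# Illustrative sketch of a "limited image dictionary": recognition only
# succeeds when an image's features fall close enough to a reference
# entry in the small dictionary tailored for the current materials.
import math

def recognize(features, dictionary, threshold=0.3):
    """dictionary maps label -> reference feature vector.
    Returns the nearest label, or None if no entry is within threshold."""
    best_label, best_dist = None, float("inf")
    for label, ref in dictionary.items():
        dist = math.dist(features, ref)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= threshold else None

# Each piece of materials carries its own small image dictionary:
book1_dict = {"map": (0.9, 0.1), "portrait": (0.2, 0.8)}
book2_dict = {"diagram": (0.5, 0.5)}

print(recognize((0.85, 0.15), book1_dict))  # recognized via book 1's dictionary
print(recognize((0.5, 0.5), book1_dict))    # not recognizable with book 1's
print(recognize((0.5, 0.5), book2_dict))    # but recognizable with book 2's
```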

1 . A method of enabling context [FEATURE ID: 15]

- specific exchanges between multiple users of a document [FEATURE ID: 7]

, comprising [TRANSITIVE ID: 4]

: ( a ) providing [TRANSITIVE ID: 8]

a single document [FEATURE ID: 5]

, defined [TRANSITIVE ID: 11]

as text [FEATURE ID: 17]

, video [FEATURE ID: 10]

, audio [FEATURE ID: 9]

, binary file [FEATURE ID: 9]

, or any combination [FEATURE ID: 6]

thereof , for review by multiple users on separate computers , mobile devices or networked systems ( b ) providing software [FEATURE ID: 1]

which uses the document as a virtual space [FEATURE ID: 5]

which is defined by its beginning , end , elements and sub-elements , in which user actions , relative proximities [FEATURE ID: 18]

, and positions [FEATURE ID: 14]

are tracked [TRANSITIVE ID: 12]

and transmitted [TRANSITIVE ID: 12]

so that ( 1 ) filtering [FEATURE ID: 17]

of conversation [FEATURE ID: 13]

and the comments of other users according to a person ' s location [FEATURE ID: 16]

in the document can be made [TRANSITIVE ID: 11]

, and ( 2 ) filtering of conversation and the comments of other users according to a person ' s range [FEATURE ID: 16]

or proximity preference [FEATURE ID: 16]

can be made , and ( 3 ) grouping by the exclusion or inclusion of other users based on certain other preferences can be made whereby a user [FEATURE ID: 2]

can view and converse [FEATURE ID: 3]

, with proximity filter settings [FEATURE ID: 14]

, as well as comment [FEATURE ID: 3]
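The reference claim treats the document as a virtual space in which users' positions are tracked, so conversation can be filtered by a reader's location and proximity range. A minimal sketch with positions modeled as character offsets (names and data are invented for illustration, not taken from the patent):

```python
# Illustrative sketch of proximity filtering: keep only comments whose
# position in the document lies within the reader's proximity range.

def filter_comments(comments, my_position, proximity_range):
    """comments: list of (user, position, text) tuples. Returns the
    comments within proximity_range of the reader's own position."""
    return [c for c in comments if abs(c[1] - my_position) <= proximity_range]

comments = [
    ("ann", 120, "great point"),
    ("bob", 480, "citation needed"),
    ("cat", 150, "agreed"),
]

# A reader at offset 140 with a 50-character range sees ann and cat;
# bob's comment, attached much later in the document, is filtered out.
print(filter_comments(comments, my_position=140, proximity_range=50))
```

The claim's further grouping step (excluding or including users by preference) would be an additional predicate applied alongside the positional one.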








Targeted Patent:

Patent: US11416668B2
Filed: 2009-10-14
Issued: 2022-08-16
Patent Holder: (Original Assignee) Iplcontent LLC     (Current Assignee) Iplcontent LLC
Inventor(s): Chi Fai Ho, Peter P. Tong

Title: Method and apparatus applicable for voice recognition with limited dictionary

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US20080298083A1
Filed: 2007-02-07
Issued: 2008-12-04
Patent Holder: (Original Assignee) Plastic Logic Ltd; FlexEnable Ltd     (Current Assignee) Plastic Logic Ltd ; FlexEnable Ltd
Inventor(s): Ben Watson, Nick Sandham, David Fisher, Duncan Barclay, Simon Jones, Carl Hayton, Anusha Nirmalananthan

Title: Electronic reading devices

[FEATURE ID: 1] apparatus applicable, apparatus | appliance, article, interface, electronic, device, system, information | [FEATURE ID: 1] electronic device, electronic book, electronic document
[FEATURE ID: 2] materials, text, first voice inputs, second voice inputs | data, audio, instructions, messages, signals, media, words | [FEATURE ID: 2] information
[TRANSITIVE ID: 3] comprising, including | having, includes, of, comprises, containing, defining, with | [TRANSITIVE ID: 3] including, has
[FEATURE ID: 4] controller, wireless communication component, storage medium, microphone | housing, display, component, substrate, support, memory, computer | [FEATURE ID: 4] front-most surface, light source, central support, device, system, spine
[TRANSITIVE ID: 5] store, present | issue, deliver, output, supply, comprise, generate, send | [TRANSITIVE ID: 5] provide
[FEATURE ID: 6] first piece, second piece | portion, second, set, number, third, plurality, reference piece | [FEATURE ID: 6] side
[FEATURE ID: 7] audio sub file, area | output, object, element, input, application, attachment, item | [FEATURE ID: 7] optical system
[TRANSITIVE ID: 8] configured, tailored, able | adapted, operable, positioned, coupled, used, arranged, designed | [TRANSITIVE ID: 8] configured, able
[TRANSITIVE ID: 9] using | of, from, with, use, the | [TRANSITIVE ID: 9] front
[FEATURE ID: 10] least | lest, or, most, at least | [FEATURE ID: 10] least
[FEATURE ID: 11] claim | any, paragraph, preceding claim, item, clause, figure, of claim | [FEATURE ID: 11] claim
[FEATURE ID: 12] interest, image | appearance, area, identification, identity, identifier, event, interface | [FEATURE ID: 12] active matrix organic electronic backplane
[FEATURE ID: 13] sensor | source, configured, device, display | [FEATURE ID: 13] light
[FEATURE ID: 14] images | imaging, sensing, vision, scanning | [FEATURE ID: 14] viewing
1 . An apparatus applicable [FEATURE ID: 1]

for a user to consume materials [FEATURE ID: 2]

, the apparatus [FEATURE ID: 1]

comprising [TRANSITIVE ID: 3]

: a controller [FEATURE ID: 4]

; a wireless communication component [FEATURE ID: 4]

; and a storage medium [FEATURE ID: 4]

to store [TRANSITIVE ID: 5]

at least a first piece [FEATURE ID: 6]

of materials , with the first piece of materials comprising a text sub file including [TRANSITIVE ID: 3]

a first piece of text [FEATURE ID: 2]

, an audio sub file [FEATURE ID: 7]

including a first piece of audio , and a logic sub file including a first piece of instructions executable by at least the controller , with the first piece of materials and a first limited voice recognition dictionary configured [TRANSITIVE ID: 8]

to be wirelessly received by the apparatus via at least the wireless communication component , with the first limited voice recognition dictionary at least tailored [TRANSITIVE ID: 8]

for the first piece of audio , wherein at least using [TRANSITIVE ID: 9]

instructions in the first piece of instructions , the controller is configured to : analyze , using at least the first limited voice recognition dictionary , data based on first voice inputs [FEATURE ID: 2]

received via at least a microphone [FEATURE ID: 4]

that is configured to be coupled at least [FEATURE ID: 10]

to the controller , to at least identify a query ; and identify an area [FEATURE ID: 7]

in the first piece of materials other than in the logic sub file , based on at least the query , to generate a response to present [FEATURE ID: 5]

to the user , with a second piece [FEATURE ID: 6]

of materials including a second piece of audio and a second piece of instructions executable by at least the controller , with the second piece of materials and a second limited voice recognition dictionary configured to be wirelessly received by the apparatus via at least the wireless communication component , with the second limited voice recognition dictionary and the first limited voice recognition dictionary both in the apparatus , but the second limited voice recognition dictionary being different from the first limited voice recognition dictionary , with the second limited voice recognition dictionary at least tailored for the second piece of audio , and with at least the controller configured to using at least instructions in the second piece of instructions and the second limited voice recognition dictionary to identify , based on voice inputs via at least the microphone , materials in the second piece of materials . 2 . An apparatus as recited in claim [FEATURE ID: 11]

1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable determining at least an interest [FEATURE ID: 12]

of the user in the first piece of materials and identify materials in the first piece of materials at least based on the interest , to present to the user . 3 . An apparatus as recited in claim 1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable searching for materials in the first piece of materials to present to the user . 4 . An apparatus as recited in claim 1 , wherein the area in the first piece of materials includes an image [FEATURE ID: 12]

, and wherein to identify the area to generate the response depends on a limited image dictionary configured to be wirelessly received by the apparatus via at least the wireless communication component . 5 . An apparatus as recited in claim 1 , wherein the apparatus is configured to be implemented as a headset . 6 . An apparatus as recited in claim 1 , wherein the apparatus is configured to be implemented in a car . 7 . An apparatus as recited in claim 1 , wherein the apparatus comprises a sensor [FEATURE ID: 13]

at least for images [FEATURE ID: 14]

, wherein the controller is configured to recognize at least an image sensed by the sensor , and wherein the image includes an image of the user or of an environment around the apparatus . 8 . An apparatus as recited in claim 7 , wherein at least using instructions in the first piece of instructions , the controller is configured to recognize at least the image using at least a limited image dictionary configured to be wirelessly received by the apparatus via at least the wireless communication component , wherein the limited image dictionary is at least tailored for images in the first piece of materials , and wherein at least another image is not able [FEATURE ID: 8]

to be recognized by the apparatus using the limited image dictionary , but able to be recognized using another limited image dictionary . 9 . An apparatus as recited in claim 8 , wherein at least using instructions in the first piece of instructions , the controller is configured to : analyze data based on second voice inputs [FEATURE ID: 2]

1 . An electronic device [FEATURE ID: 1]

including [TRANSITIVE ID: 3]

an electroactive display and a light [FEATURE ID: 13]

to illuminate said display , wherein said display has [TRANSITIVE ID: 3]

a viewing surface , and wherein said light is configured [TRANSITIVE ID: 8]

to illuminate said display from in front [FEATURE ID: 9]

and to one side [FEATURE ID: 6]

of an edge of said viewing [TRANSITIVE ID: 14]

surface , across said viewing surface and through a front-most surface [FEATURE ID: 4]

of said display . 2 . An electronic device as claimed in claim [FEATURE ID: 11]

1 wherein said viewing surface has a concave curvature . 3 . An electronic device as claimed in claim 1 wherein said light is configured to provide [TRANSITIVE ID: 5]

a stripe of illumination along substantially a complete edge of said viewing surface and directed across said viewing surface . 4 . An electronic device as claimed in claim 3 wherein said light comprises a plurality of LEDs . 5 . An electronic device as claimed in claim 1 wherein said light comprises a light source [FEATURE ID: 4]

at least [FEATURE ID: 10]

partially behind said edge of said viewing surface of said display and an optical system [FEATURE ID: 7]

to direct illumination from said light source onto said viewing surface from the front of said edge of said viewing surface . 6 . An electronic device as claimed in claim 1 including two said electroactive displays mounted on a central support [FEATURE ID: 4]

, and wherein said light is mounted within said central support and configured to be able [FEATURE ID: 8]

to illuminate each of said electroactive displays . 7 . An electronic device as claimed in claim 1 wherein said light is configured to provide substantially uniform illumination over a majority of said viewing surface . 8 . An electronic device as claimed in claim 1 wherein said light is capable of changing colour , and wherein said device [FEATURE ID: 4]

includes a system [FEATURE ID: 4]

to control a colour of said light in coordination with information [FEATURE ID: 2]

displayed on said device . 9 . An electronic device as claimed in claim 1 wherein said device is an electronic book [FEATURE ID: 1]

. 10 . An electronic device as claimed in claim 1 wherein said electroactive display has an active matrix organic electronic backplane [FEATURE ID: 12]

. 11 . An electronic device as claimed in claim 1 , wherein said electroactive display comprises an electrophoretic display . 12 . An electronic document [FEATURE ID: 1]

reading device comprising at least one page attached to a spine [FEATURE ID: 4]








Targeted Patent:

Patent: US11416668B2
Filed: 2009-10-14
Issued: 2022-08-16
Patent Holder: (Original Assignee) Iplcontent LLC     (Current Assignee) Iplcontent LLC
Inventor(s): Chi Fai Ho, Peter P. Tong

Title: Method and apparatus applicable for voice recognition with limited dictionary

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US20080268416A1
Filed: 2007-04-23
Issued: 2008-10-30
Patent Holder: (Original Assignee) Wallace Michael W; Phillip Trevor Odom     
Inventor(s): Michael W. Wallace, Phillip Trevor Odom

Title: Apparatus and methods for an interactive electronic book system

[FEATURE ID: 1] apparatus applicable, apparatus, audio sub file, image, environment | article, item, application, object, interface, device, system | [FEATURE ID: 1] interactive electronic book system
[FEATURE ID: 2] materials | instructions, material, notes, text, music, images, indicia | [FEATURE ID: 2] visual content, audio content
[TRANSITIVE ID: 3] comprising, including | having, includes, containing, with, of, and, providing | [TRANSITIVE ID: 3] comprising, including
[FEATURE ID: 4] controller, second limited voice recognition dictionary, sensor | memory, display, database, battery, keyboard, cpu, processor | [FEATURE ID: 4] speaker, volatile memory, microprocessor
[FEATURE ID: 5] wireless communication component | processor, memory, device, controller | [FEATURE ID: 5] digital computer
[FEATURE ID: 6] storage medium, text sub file, headset | device, text, container, database, document, tablet, housing | [FEATURE ID: 6] book
[TRANSITIVE ID: 7] store | house, contain, have, carry, provide | [TRANSITIVE ID: 7] includes
[FEATURE ID: 8] text, data | content, instructions, metadata, parameters, material, information, and | [FEATURE ID: 8] calibration data
[FEATURE ID: 9] audio | information, instructions, material, metadata | [FEATURE ID: 9] temperature compensation data
[FEATURE ID: 10] instructions executable, instructions, materials other | data, information, commands, logic, codes, software, programs | [FEATURE ID: 10] software instructions
[TRANSITIVE ID: 11] received | captured, collected, acquired, read | [TRANSITIVE ID: 11] detected
[TRANSITIVE ID: 12] using | processing, performing, implementing, utilizing, making | [TRANSITIVE ID: 12] operating
[FEATURE ID: 13] first voice inputs, images, second voice inputs | data, information, outputs, recordings, presence, samples, audio | [FEATURE ID: 13] cumulative magnetic field
[FEATURE ID: 14] microphone | switch, transducer, microprocessor, module, device | [FEATURE ID: 14] digital converter
[FEATURE ID: 15] response, limited image dictionary | result, signal, message, representation, document, query, list | [FEATURE ID: 15] digital form
[FEATURE ID: 16] voice inputs | data, input, output, information | [FEATURE ID: 16] temperature sensor electrical output
[FEATURE ID: 17] claim | item, statement, aspect, preceding claim, claimed, of claim, embodiment | [FEATURE ID: 17] claim
1 . An apparatus applicable [FEATURE ID: 1]

for a user to consume materials [FEATURE ID: 2]

, the apparatus [FEATURE ID: 1]

comprising [TRANSITIVE ID: 3]

: a controller [FEATURE ID: 4]

; a wireless communication component [FEATURE ID: 5]

; and a storage medium [FEATURE ID: 6]

to store [TRANSITIVE ID: 7]

at least a first piece of materials , with the first piece of materials comprising a text sub file [FEATURE ID: 6]

including [TRANSITIVE ID: 3]

a first piece of text [FEATURE ID: 8]

, an audio sub file [FEATURE ID: 1]

including a first piece of audio [FEATURE ID: 9]

, and a logic sub file including a first piece of instructions executable [FEATURE ID: 10]

by at least the controller , with the first piece of materials and a first limited voice recognition dictionary configured to be wirelessly received [TRANSITIVE ID: 11]

by the apparatus via at least the wireless communication component , with the first limited voice recognition dictionary at least tailored for the first piece of audio , wherein at least using [TRANSITIVE ID: 12]

instructions [FEATURE ID: 10]

in the first piece of instructions , the controller is configured to : analyze , using at least the first limited voice recognition dictionary , data [FEATURE ID: 8]

based on first voice inputs [FEATURE ID: 13]

received via at least a microphone [FEATURE ID: 14]

that is configured to be coupled at least to the controller , to at least identify a query ; and identify an area in the first piece of materials other [FEATURE ID: 10]

than in the logic sub file , based on at least the query , to generate a response [FEATURE ID: 15]

to present to the user , with a second piece of materials including a second piece of audio and a second piece of instructions executable by at least the controller , with the second piece of materials and a second limited voice recognition dictionary [FEATURE ID: 4]

configured to be wirelessly received by the apparatus via at least the wireless communication component , with the second limited voice recognition dictionary and the first limited voice recognition dictionary both in the apparatus , but the second limited voice recognition dictionary being different from the first limited voice recognition dictionary , with the second limited voice recognition dictionary at least tailored for the second piece of audio , and with at least the controller configured to using at least instructions in the second piece of instructions and the second limited voice recognition dictionary to identify , based on voice inputs [FEATURE ID: 16]

via at least the microphone , materials in the second piece of materials . 2 . An apparatus as recited in claim [FEATURE ID: 17]

1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable determining at least an interest of the user in the first piece of materials and identify materials in the first piece of materials at least based on the interest , to present to the user . 3 . An apparatus as recited in claim 1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable searching for materials in the first piece of materials to present to the user . 4 . An apparatus as recited in claim 1 , wherein the area in the first piece of materials includes an image [FEATURE ID: 1]

, and wherein to identify the area to generate the response depends on a limited image dictionary [FEATURE ID: 15]

configured to be wirelessly received by the apparatus via at least the wireless communication component . 5 . An apparatus as recited in claim 1 , wherein the apparatus is configured to be implemented as a headset [FEATURE ID: 6]

. 6 . An apparatus as recited in claim 1 , wherein the apparatus is configured to be implemented in a car . 7 . An apparatus as recited in claim 1 , wherein the apparatus comprises a sensor [FEATURE ID: 4]

at least for images [FEATURE ID: 13]

, wherein the controller is configured to recognize at least an image sensed by the sensor , and wherein the image includes an image of the user or of an environment [FEATURE ID: 1]

around the apparatus . 8 . An apparatus as recited in claim 7 , wherein at least using instructions in the first piece of instructions , the controller is configured to recognize at least the image using at least a limited image dictionary configured to be wirelessly received by the apparatus via at least the wireless communication component , wherein the limited image dictionary is at least tailored for images in the first piece of materials , and wherein at least another image is not able to be recognized by the apparatus using the limited image dictionary , but able to be recognized using another limited image dictionary . 9 . An apparatus as recited in claim 8 , wherein at least using instructions in the first piece of instructions , the controller is configured to : analyze data based on second voice inputs [FEATURE ID: 13]
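The core idea charted above — each piece of content ships with its own small recognition vocabulary, and spoken input is matched only against the dictionary tailored to the currently loaded content — can be sketched as follows. This is a hypothetical illustration (all names, data, and the keyword-matching "recognizer" are assumptions; a real implementation would use an ASR engine constrained by the limited dictionary):

```python
# Hypothetical sketch: per-content limited voice-recognition dictionaries.
# Each content piece carries a tailored vocabulary; voice input is analyzed
# only against the dictionary for the piece currently in the apparatus.

def load_content(piece):
    """Return the content's areas and its tailored limited dictionary."""
    return piece["areas"], set(piece["dictionary"])

def identify_query(tokens, limited_dictionary):
    """Keep only tokens the limited dictionary can recognize."""
    return [t for t in tokens if t in limited_dictionary]

def find_area(areas, query_terms):
    """Identify the first content area matching any recognized query term."""
    for name, text in areas.items():
        if any(term in text for term in query_terms):
            return name
    return None

# Two pieces of content, each wirelessly received with a different dictionary.
chapter1 = {
    "dictionary": ["whale", "ship", "captain"],
    "areas": {"intro": "the captain boards the ship", "body": "a whale appears"},
}
chapter2 = {
    "dictionary": ["island", "treasure"],
    "areas": {"map": "the treasure lies on the island"},
}

areas, dict1 = load_content(chapter1)
query = identify_query("where is the whale".split(), dict1)  # -> ["whale"]
print(find_area(areas, query))  # -> "body"
```

Note that a word outside `dict1` (say, "treasure") would not be recognized while chapter 1 is loaded, mirroring the claim's point that one limited dictionary cannot recognize what another is tailored for.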

1 . An interactive electronic book system [FEATURE ID: 1]

, comprising [TRANSITIVE ID: 3]

a book [FEATURE ID: 6]

including [TRANSITIVE ID: 3]

a front cover , a back cover , and a plurality of pages including visual content [FEATURE ID: 2]

, wherein said front cover and each of said plurality of pages further includes [TRANSITIVE ID: 7]

a pagination magnet , each of said pagination magnets aligned with the other said magnets so as to overlay one another when said front cover and plurality of pages is closed ; a magnetic sensor in close proximity to said back cover and aligned with said pagination magnets , wherein said magnetic sensor produces an electrical output related to the cumulative magnetic field [FEATURE ID: 13]

from said pagination magnets detected [TRANSITIVE ID: 11]

by said magnetic sensor ; a speaker [FEATURE ID: 4]

; and a digital computer [FEATURE ID: 5]

in electronic communication with at least said magnetic sensor and said speaker , said digital computer including non-volatile memory , volatile memory [FEATURE ID: 4]

, a microprocessor [FEATURE ID: 4]

, an analog - to - digital converter [FEATURE ID: 14]

for converting said electrical output of said magnetic sensor to a digital form [FEATURE ID: 15]

, software instructions [FEATURE ID: 10]

for operating [FEATURE ID: 12]

said digital computer stored in said non-volatile memory , calibration data [FEATURE ID: 8]

related to said magnetic sensor stored in said non-volatile memory , and audio content [FEATURE ID: 2]

related to each of said plurality of pages of said book stored in said non-volatile memory ; and wherein said digital computer uses said electrical output of said magnetic sensor and said calibration data to determine which of said plurality of pages said book is open to , and causes said speaker to play said audio content related to said open page . 2 . An interactive electronic book system as in claim [FEATURE ID: 17]

1 , further comprising : a temperature sensor connected to said book , wherein said temperature sensor produces an electrical output related to the ambient temperature , and wherein said temperature sensor is in electronic communication with said digital computer ; temperature compensation data [FEATURE ID: 9]

relating to said magnetic sensor and said pagination magnets stored in said non-volatile memory ; and wherein , said digital computer uses said temperature sensor electrical output [FEATURE ID: 16]
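The pagination mechanism in the book-system claims — one magnet per page, a sensor reading the cumulative field, and stored calibration data mapping that reading to the open page — can be sketched like this. All numeric values are hypothetical stand-ins for real calibration data:

```python
# Sketch (hypothetical values): the claim maps the magnetic sensor's
# electrical output, via stored calibration data, to the page the book is
# open to, then plays the audio content stored for that page. Each
# calibration entry records the cumulative field expected at that page.

CALIBRATION = {  # page -> expected sensor reading (arbitrary units)
    1: 9.0,   # open to page 1: most pagination magnets still stacked
    2: 7.5,
    3: 6.1,
    4: 4.8,
}

def page_from_reading(reading, calibration=CALIBRATION):
    """Return the page whose calibrated field is closest to the reading."""
    return min(calibration, key=lambda page: abs(calibration[page] - reading))

def play_audio_for(reading, audio_content):
    """Select the audio clip stored for the detected open page."""
    return audio_content[page_from_reading(reading)]

audio = {1: "page1.wav", 2: "page2.wav", 3: "page3.wav", 4: "page4.wav"}
print(page_from_reading(6.0))      # -> 3
print(play_audio_for(7.4, audio))  # -> "page2.wav"
```

The claim's temperature-compensation limitation would correspond to adjusting `reading` (or the calibration table) by a correction derived from the temperature sensor before the nearest-entry lookup.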








Targeted Patent:

Patent: US11416668B2
Filed: 2009-10-14
Issued: 2022-08-16
Patent Holder: (Original Assignee) Iplcontent LLC     (Current Assignee) Iplcontent LLC
Inventor(s): Chi Fai Ho, Peter P. Tong

Title: Method and apparatus applicable for voice recognition with limited dictionary

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US7441207B2
Filed: 2004-03-18
Issued: 2008-10-21
Patent Holder: (Original Assignee) Microsoft Corp     (Current Assignee) Microsoft Technology Licensing LLC
Inventor(s): Aaron S. Filner, Jay F. McLain, Wei-Ying Ma

Title: Method and system for improved viewing and navigation of content

[FEATURE ID: 1] apparatus applicable | apparatus, interface, user, device, system, application, equipment | [FEATURE ID: 1] mobile device
[TRANSITIVE ID: 2] consume | access, edit, store, select, browse, review, read | [TRANSITIVE ID: 2] display
[FEATURE ID: 3] materials, materials other, second limited voice recognition dictionary | data, material, information, audio, document, the, code | [FEATURE ID: 3] content
[TRANSITIVE ID: 4] comprising | featuring, including, containing, of, comprises, involving, incorporating | [TRANSITIVE ID: 4] having, comprising
[FEATURE ID: 5] controller, storage medium | device, memory, housing, circuit, processor, server, user | [FEATURE ID: 5] display
[FEATURE ID: 6] wireless communication component, microphone, headset, car, sensor | display, camera, device, server, terminal, memory, mobile | [FEATURE ID: 6] mobile computing device, limited display capabilities
[FEATURE ID: 7] first piece, logic sub file, second piece, limited image dictionary | portion, first, set, plurality, number, second, document | [FEATURE ID: 7] page, thumbnail, second region
[FEATURE ID: 8] text sub file, first limited voice recognition dictionary | template, document, file, website, message, thumbnail, spreadsheet | [FEATURE ID: 8] full readable content page, navigation grid, web page
[TRANSITIVE ID: 9] including | displaying, receiving, providing, defining | [TRANSITIVE ID: 9] dividing
[FEATURE ID: 10] audio sub file | object, image, overlay, area | [FEATURE ID: 10] marker
[TRANSITIVE ID: 11] using | processing, making, receiving, following | [TRANSITIVE ID: 11] detecting
[FEATURE ID: 12] query | location, prompt, user, report, result, trigger, direction | [FEATURE ID: 12] request, visual indication, cursor
[FEATURE ID: 13] response | display, content, thumbnail, page, window, location, marker | [FEATURE ID: 13] region, content page, tooltip, limit, panelized region, regions such
[FEATURE ID: 14] voice inputs, images | the, identification, data, receipt, detection, recognition, input | [FEATURE ID: 14] selection
[FEATURE ID: 15] claim | figure, claimed, clair, embodiment, clam, paragraph, preceding claim | [FEATURE ID: 15] claim
[FEATURE ID: 16] second voice inputs | commands, input, gestures, the | [FEATURE ID: 16] navigation commands
1 . An apparatus applicable [FEATURE ID: 1]

for a user to consume [TRANSITIVE ID: 2]

materials [FEATURE ID: 3]

, the apparatus comprising [TRANSITIVE ID: 4]

: a controller [FEATURE ID: 5]

; a wireless communication component [FEATURE ID: 6]

; and a storage medium [FEATURE ID: 5]

to store at least a first piece [FEATURE ID: 7]

of materials , with the first piece of materials comprising a text sub file [FEATURE ID: 8]

including [TRANSITIVE ID: 9]

a first piece of text , an audio sub file [FEATURE ID: 10]

including a first piece of audio , and a logic sub file [FEATURE ID: 7]

including a first piece of instructions executable by at least the controller , with the first piece of materials and a first limited voice recognition dictionary [FEATURE ID: 8]

configured to be wirelessly received by the apparatus via at least the wireless communication component , with the first limited voice recognition dictionary at least tailored for the first piece of audio , wherein at least using [TRANSITIVE ID: 11]

instructions in the first piece of instructions , the controller is configured to : analyze , using at least the first limited voice recognition dictionary , data based on first voice inputs received via at least a microphone [FEATURE ID: 6]

that is configured to be coupled at least to the controller , to at least identify a query [FEATURE ID: 12]

; and identify an area in the first piece of materials other [FEATURE ID: 3]

than in the logic sub file , based on at least the query , to generate a response [FEATURE ID: 13]

to present to the user , with a second piece [FEATURE ID: 7]

of materials including a second piece of audio and a second piece of instructions executable by at least the controller , with the second piece of materials and a second limited voice recognition dictionary [FEATURE ID: 3]

configured to be wirelessly received by the apparatus via at least the wireless communication component , with the second limited voice recognition dictionary and the first limited voice recognition dictionary both in the apparatus , but the second limited voice recognition dictionary being different from the first limited voice recognition dictionary , with the second limited voice recognition dictionary at least tailored for the second piece of audio , and with at least the controller configured to using at least instructions in the second piece of instructions and the second limited voice recognition dictionary to identify , based on voice inputs [FEATURE ID: 14]

via at least the microphone , materials in the second piece of materials . 2 . An apparatus as recited in claim [FEATURE ID: 15]

1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable determining at least an interest of the user in the first piece of materials and identify materials in the first piece of materials at least based on the interest , to present to the user . 3 . An apparatus as recited in claim 1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable searching for materials in the first piece of materials to present to the user . 4 . An apparatus as recited in claim 1 , wherein the area in the first piece of materials includes an image , and wherein to identify the area to generate the response depends on a limited image dictionary [FEATURE ID: 7]

configured to be wirelessly received by the apparatus via at least the wireless communication component . 5 . An apparatus as recited in claim 1 , wherein the apparatus is configured to be implemented as a headset [FEATURE ID: 6]

. 6 . An apparatus as recited in claim 1 , wherein the apparatus is configured to be implemented in a car [FEATURE ID: 6]

. 7 . An apparatus as recited in claim 1 , wherein the apparatus comprises a sensor [FEATURE ID: 6]

at least for images [FEATURE ID: 14]

, wherein the controller is configured to recognize at least an image sensed by the sensor , and wherein the image includes an image of the user or of an environment around the apparatus . 8 . An apparatus as recited in claim 7 , wherein at least using instructions in the first piece of instructions , the controller is configured to recognize at least the image using at least a limited image dictionary configured to be wirelessly received by the apparatus via at least the wireless communication component , wherein the limited image dictionary is at least tailored for images in the first piece of materials , and wherein at least another image is not able to be recognized by the apparatus using the limited image dictionary , but able to be recognized using another limited image dictionary . 9 . An apparatus as recited in claim 8 , wherein at least using instructions in the first piece of instructions , the controller is configured to : analyze data based on second voice inputs [FEATURE ID: 16]

1 . In a mobile computing device [FEATURE ID: 6]

having [TRANSITIVE ID: 4]

limited display capabilities , a method for displaying a full readable content page [FEATURE ID: 8]

despite the limited display capabilities [FEATURE ID: 6]

of the mobile device [FEATURE ID: 1]

, the method comprising [TRANSITIVE ID: 4]

: dividing [TRANSITIVE ID: 9]

a page [FEATURE ID: 7]

of content [FEATURE ID: 3]

into a plurality of regions ; displaying the plurality of the regions of the page of content together as a thumbnail [FEATURE ID: 7]

and in a reduced size on a display [FEATURE ID: 5]

of a mobile computing device ; detecting [TRANSITIVE ID: 11]

a request [FEATURE ID: 12]

to display [TRANSITIVE ID: 2]

a selected one of the regions ; replacing the thumbnail on the display by displaying the selected region [FEATURE ID: 13]

in a size that is expanded relative to the reduced size of the selected region in the thumbnail ; from the displayed selected region in the expanded size , detecting a request to display a second region [FEATURE ID: 7]

of the plurality of regions of the content page [FEATURE ID: 13]

and determining which of the plurality of regions is the second region , the second region having been displayed in the thumbnail and excluded from the selected region displayed in the expanded size ; in response to after detecting the request to display the second region that is excluded from the selected region and determining which of the plurality of regions is the second region , temporarily re-displaying the thumbnail on the display , wherein the temporarily displayed thumbnail now highlights the newly selected second region when the thumbnail reappears ; and after temporarily displaying the thumbnail following selection [FEATURE ID: 14]

of the second region , displaying the second region on the display in a size that is expanded relative to the reduced size of the second region in the thumbnail . 2 . The method of claim [FEATURE ID: 15]

1 wherein dividing the content into the regions comprises providing a navigation grid [FEATURE ID: 8]

having a plurality of regions which can each be navigated to via navigation commands [FEATURE ID: 16]

. 3 . The method of claim 1 wherein dividing the content into the regions comprises panelizing the content into panelized regions . 4 . The method of claim 1 further comprising , providing a tooltip [FEATURE ID: 13]

that is based on the content of a region that is being displayed in the reduced size . 5 . The method of claim 1 wherein displaying the selected region in the expanded size comprises scaling the selected region such that its content can be viewed by scrolling in only one dimension . 6 . The method of claim 5 wherein detecting a request to display a second region comprises scrolling in a second dimension , wherein scrolling in a second dimension is indicative of a request to change the displayed region from the previously selected region to another region . 7 . The method of claim 6 further comprising , providing a visual indication [FEATURE ID: 12]

of the change of regions . 8 . The method of claim 5 wherein scrolling in the one dimension beyond a limit [FEATURE ID: 13]

in the region changes the displayed region from the previously selected region to another region . 9 . The method of claim 8 further comprising , providing a visual indication of the change of regions . 10 . The method of claim 1 further comprising , when the regions are displayed together in a reduced size , providing a cursor [FEATURE ID: 12]

that indicates which region will be selected as the selected region upon detecting the request to display one of the regions . 11 . The method of claim 10 wherein dividing the content into the regions comprises providing a navigation grid having a plurality of regions which can each be navigated to via navigation commands , and wherein the cursor is provided as a grid framing marker [FEATURE ID: 10]

. 12 . The method of claim 1 wherein dividing the content into the regions comprises panelizing the content into panelized regions , and wherein providing the cursor comprises marking a border around a panelized region [FEATURE ID: 13]

. 13 . The method of claim 1 wherein displaying the regions in the reduced size comprises scaling the regions such [FEATURE ID: 13]

that the regions can be viewed by scrolling in only one dimension . 14 . The method of claim 1 further comprising , receiving the content as a web page [FEATURE ID: 8]
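The navigation flow in the method claims above — divide a content page into regions, show them together as a reduced-size thumbnail, expand a selected region, and temporarily re-show the thumbnail when the user moves to a region excluded from the current one — can be sketched as below. Grid dimensions and the page representation are hypothetical:

```python
# Sketch of the claimed region-navigation flow: a page is divided into a
# grid of regions, displayed together as a thumbnail, and a selected region
# is then displayed expanded relative to its reduced size in the thumbnail.

def divide_into_regions(page, rows, cols):
    """Divide a page (here a 2-D list of cells) into a rows x cols grid."""
    h, w = len(page) // rows, len(page[0]) // cols
    return {
        (r, c): [line[c * w:(c + 1) * w] for line in page[r * h:(r + 1) * h]]
        for r in range(rows) for c in range(cols)
    }

def display(regions, selected=None):
    """Return what is on screen: the whole thumbnail or one expanded region."""
    if selected is None:
        return ("thumbnail", sorted(regions))  # all regions, reduced size
    return ("expanded", selected, regions[selected])

page = [list("abcdef"), list("ghijkl"), list("mnopqr"), list("stuvwx")]
regions = divide_into_regions(page, rows=2, cols=2)

print(display(regions))                    # thumbnail of all four regions
print(display(regions, selected=(0, 1)))   # region (0,1) expanded
# Requesting a region excluded from the expanded one temporarily re-displays
# the thumbnail (with the new region highlighted) before expanding it.
print(display(regions))                    # temporary thumbnail re-display
print(display(regions, selected=(1, 0)))   # then the second region, expanded
```

The claim's "navigation grid" and "panelized regions" variants differ only in how `divide_into_regions` partitions the page.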








Targeted Patent:

Patent: US11416668B2
Filed: 2009-10-14
Issued: 2022-08-16
Patent Holder: (Original Assignee) Iplcontent LLC     (Current Assignee) Iplcontent LLC
Inventor(s): Chi Fai Ho, Peter P. Tong

Title: Method and apparatus applicable for voice recognition with limited dictionary

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US20080222273A1
Filed: 2007-03-07
Issued: 2008-09-11
Patent Holder: (Original Assignee) Microsoft Corp     (Current Assignee) Microsoft Technology Licensing LLC
Inventor(s): Thyagarajan Lakshmanan, Ting-yi Yang

Title: Adaptive rendering of web pages on mobile devices using imaging technology

[FEATURE ID: 1] materials | information, data, media, contents, resources, items | [FEATURE ID: 1] content
[TRANSITIVE ID: 2] comprising, including, using | of, with, containing, for, and, representing, includes | [TRANSITIVE ID: 2] having, comprising
[FEATURE ID: 3] controller | user, client, host, memory, device, display, component | [FEATURE ID: 3] computer, client recipient, server
[FEATURE ID: 4] wireless communication component, sensor | device, camera, processor, display, component, scanner, memory | [FEATURE ID: 4] browser
[FEATURE ID: 5] storage medium | memory, repository, location, file, publisher, page, user | [FEATURE ID: 5] server cache, content provider
[FEATURE ID: 6] first piece, second piece | set, block, first, plurality, second, group, quantity | [FEATURE ID: 6] full set, first set
[FEATURE ID: 7] text sub file | document, form, portion, description, page | [FEATURE ID: 7] image
[FEATURE ID: 8] text, audio | image, content, instructions, output, language, code, metadata | [FEATURE ID: 8] data
[FEATURE ID: 9] audio sub file, interest, image | area, identification, item, object, index, audio, identifier | [FEATURE ID: 9] client instance
[FEATURE ID: 10] instructions executable | logic, operable, actionable, commands, software, accessible, execution | [FEATURE ID: 10] executable instructions
[FEATURE ID: 11] first limited voice recognition dictionary, query, limited image dictionary | message, document, response, command, signal, database, call | [FEATURE ID: 11] request, value
[TRANSITIVE ID: 12] configured | received, implemented, deployed, enabled, used, programmed | [TRANSITIVE ID: 12] executed
[FEATURE ID: 13] data | inputs, instructions, conditions, information, content, and, responses | [FEATURE ID: 13] property data, user input actions relative
[TRANSITIVE ID: 14] based | captured, present, comprised, stored, included, encoded | [TRANSITIVE ID: 14] represented
[FEATURE ID: 15] microphone | user, computer, terminal, network, controller | [FEATURE ID: 15] client
[FEATURE ID: 16] area | element, answer, action, event, object, information, address | [FEATURE ID: 16] acknowledgement
[FEATURE ID: 17] response | display, location, view, map, summary, rendering, content | [FEATURE ID: 17] thumbnail image, representation
[FEATURE ID: 18] second limited voice recognition dictionary | command, message, link, query, data, code, response | [FEATURE ID: 18] subsequent request
[FEATURE ID: 19] claim | figure, item, clair, preceding claim, the claim, of claim, paragraph | [FEATURE ID: 19] claim
1 . An apparatus applicable for a user to consume materials [FEATURE ID: 1]

, the apparatus comprising [TRANSITIVE ID: 2]

: a controller [FEATURE ID: 3]

; a wireless communication component [FEATURE ID: 4]

; and a storage medium [FEATURE ID: 5]

to store at least a first piece [FEATURE ID: 6]

of materials , with the first piece of materials comprising a text sub file [FEATURE ID: 7]

including [TRANSITIVE ID: 2]

a first piece of text [FEATURE ID: 8]

, an audio sub file [FEATURE ID: 9]

including a first piece of audio [FEATURE ID: 8]

, and a logic sub file including a first piece of instructions executable [FEATURE ID: 10]

by at least the controller , with the first piece of materials and a first limited voice recognition dictionary [FEATURE ID: 11]

configured [TRANSITIVE ID: 12]

to be wirelessly received by the apparatus via at least the wireless communication component , with the first limited voice recognition dictionary at least tailored for the first piece of audio , wherein at least using [TRANSITIVE ID: 2]

instructions in the first piece of instructions , the controller is configured to : analyze , using at least the first limited voice recognition dictionary , data [FEATURE ID: 13]

based [TRANSITIVE ID: 14]

on first voice inputs received via at least a microphone [FEATURE ID: 15]

that is configured to be coupled at least to the controller , to at least identify a query [FEATURE ID: 11]

; and identify an area [FEATURE ID: 16]

in the first piece of materials other than in the logic sub file , based on at least the query , to generate a response [FEATURE ID: 17]

to present to the user , with a second piece [FEATURE ID: 6]

of materials including a second piece of audio and a second piece of instructions executable by at least the controller , with the second piece of materials and a second limited voice recognition dictionary [FEATURE ID: 18]

configured to be wirelessly received by the apparatus via at least the wireless communication component , with the second limited voice recognition dictionary and the first limited voice recognition dictionary both in the apparatus , but the second limited voice recognition dictionary being different from the first limited voice recognition dictionary , with the second limited voice recognition dictionary at least tailored for the second piece of audio , and with at least the controller configured to using at least instructions in the second piece of instructions and the second limited voice recognition dictionary to identify , based on voice inputs via at least the microphone , materials in the second piece of materials . 2 . An apparatus as recited in claim [FEATURE ID: 19]

1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable determining at least an interest [FEATURE ID: 9]

of the user in the first piece of materials and identify materials in the first piece of materials at least based on the interest , to present to the user . 3 . An apparatus as recited in claim 1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable searching for materials in the first piece of materials to present to the user . 4 . An apparatus as recited in claim 1 , wherein the area in the first piece of materials includes an image [FEATURE ID: 9]

, and wherein to identify the area to generate the response depends on a limited image dictionary [FEATURE ID: 11]

configured to be wirelessly received by the apparatus via at least the wireless communication component . 5 . An apparatus as recited in claim 1 , wherein the apparatus is configured to be implemented as a headset . 6 . An apparatus as recited in claim 1 , wherein the apparatus is configured to be implemented in a car . 7 . An apparatus as recited in claim 1 , wherein the apparatus comprises a sensor [FEATURE ID: 4]

1 . A computer [FEATURE ID: 3]

- readable medium having [TRANSITIVE ID: 2]

computer - executable instructions [FEATURE ID: 10]

, which when executed [TRANSITIVE ID: 12]

perform steps , comprising [TRANSITIVE ID: 2]

: receiving a request [FEATURE ID: 11]

for a page ; retrieving the page ; converting the page to image data and properties of elements of the page represented [TRANSITIVE ID: 14]

in the image data ; and sending the image data and properties in response to the request . 2 . The computer - readable medium of claim [FEATURE ID: 19]

1 wherein retrieving the page comprises obtaining the page from a server cache [FEATURE ID: 5]

, or requesting a page from a content provider [FEATURE ID: 5]

. 3 . The computer - readable medium of claim 1 wherein converting the page to image data comprises rendering the page with a browser [FEATURE ID: 4]

, and generating the image data from the rendered page as compressed binary serialized data [FEATURE ID: 8]

. 4 . The computer - readable medium of claim 1 having further computer - executable instructions , comprising receiving a subsequent request [FEATURE ID: 18]

for the page and a value [FEATURE ID: 11]

that identifies an client instance [FEATURE ID: 9]

of cached data corresponding to the page , and returning an acknowledgement [FEATURE ID: 16]

in response to the subsequent request indicating that the client instance or cached data is valid . 5 . The computer - readable medium of claim 1 wherein sending the image data and properties in response to the request comprises sending a full set [FEATURE ID: 6]

of data from which a thumbnail image [FEATURE ID: 17]

and one or more tiles may be generated by a client recipient [FEATURE ID: 3]

. 6 . The computer - readable medium of claim 1 wherein sending the image data and properties in response to the request comprises sending a first set [FEATURE ID: 6]

of data corresponding to a thumbnail image of the page to a client recipient , and sending at a second set of data comprising at least one tile to the client recipient . 7 . The computer - readable medium of claim 6 wherein at least part of the second set of data is sent in response to a request from a client [FEATURE ID: 15]

, or in a background operation , or in a combination of being sent in response to a request from a client and in a background operation . 8 . A computer - readable medium having computer - executable instructions , which when executed perform steps , comprising : requesting a page of content [FEATURE ID: 1]

from a server [FEATURE ID: 3]

; receiving image data corresponding to a server - rendered image [FEATURE ID: 7]

of the page , and property data [FEATURE ID: 13]

associated with elements of the page represented in the image data ; displaying a representation [FEATURE ID: 17]

of at least part of the page based on the image data ; and using the property data to convert user input actions relative [FEATURE ID: 13]
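The server/client split in the adaptive-rendering claims — the server converts a page to image data plus properties of the page's elements, and the client uses those properties to map user input on the image back to elements — can be sketched as below. The data shapes are hypothetical stand-ins for whatever the server actually serializes:

```python
# Sketch: server converts a page to 'image data' and element properties;
# the client displays the image and uses the property data to convert user
# input actions (a tap at pixel coordinates) into the element under them.

def server_convert(page_elements):
    """Render a page to image data plus element bounding-box properties."""
    image_data = b"...rendered-pixels..."  # stand-in for real pixel data
    properties = [
        {"id": e["id"], "box": e["box"], "href": e.get("href")}
        for e in page_elements
    ]
    return image_data, properties

def client_hit_test(x, y, properties):
    """Convert a tap at (x, y) on the image into the element under it."""
    for p in properties:
        left, top, right, bottom = p["box"]
        if left <= x < right and top <= y < bottom:
            return p["id"]
    return None

elements = [
    {"id": "logo", "box": (0, 0, 100, 40)},
    {"id": "link1", "box": (0, 50, 200, 70), "href": "/news"},
]
image, props = server_convert(elements)
print(client_hit_test(20, 60, props))  # -> "link1"
```

The claimed thumbnail/tile split would correspond to sending a downscaled copy of `image_data` first, then full-resolution tiles on demand or in a background operation.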








Targeted Patent:

Patent: US11416668B2
Filed: 2009-10-14
Issued: 2022-08-16
Patent Holder: (Original Assignee) Iplcontent LLC     (Current Assignee) Iplcontent LLC
Inventor(s): Chi Fai Ho, Peter P. Tong

Title: Method and apparatus applicable for voice recognition with limited dictionary

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US7412647B2
Filed: 2005-03-04
Issued: 2008-08-12
Patent Holder: (Original Assignee) Microsoft Corp     (Current Assignee) Microsoft Technology Licensing LLC
Inventor(s): Timothy D. Sellers, Heather L. Grantham, Joshua A. Dersch

Title: Method and system for laying out paginated content for viewing

[FEATURE ID: 1] apparatus applicable | apparatus, article, device, system | [FEATURE ID: 1] readable storage medium
[TRANSITIVE ID: 2] consume, store, analyze, present | access, provide, generate, retrieve, process, receive, identify | [TRANSITIVE ID: 2] display
[FEATURE ID: 3] materials, text, instructions | information, data, resources, items, images, contents, media | [FEATURE ID: 3] pages, content, previous pages
[FEATURE ID: 4] apparatus | controller, device, system, method | [FEATURE ID: 4] computer
[TRANSITIVE ID: 5] comprising, using | of, including, for, containing, with, from, providing | [TRANSITIVE ID: 5] having, comprising, representing
[FEATURE ID: 6] controller, wireless communication component, storage medium, logic sub file, first limited voice recognition dictionary, microphone, second limited voice recognition dictionary, headset, sensor | device, database, memory, display, receiver, computer, server | [FEATURE ID: 6] computing environment, document
[FEATURE ID: 7] first piece, second piece | first, second, plurality, number, particular type, portion, types | [FEATURE ID: 7] first type, second type, same type, first content type, previous page
[FEATURE ID: 8] text sub file | document, spreadsheet, template, table, page | [FEATURE ID: 8] page grid
[TRANSITIVE ID: 9] including | providing, defining, representing, identifying, requiring, determining, causing | [TRANSITIVE ID: 9] indicating, such
[FEATURE ID: 10] instructions executable | operable, actionable, encoded, accessible, software, instructions, readable | [FEATURE ID: 10] executable instructions
[FEATURE ID: 11] data, based, first voice inputs | information, content, text, results, items, speech, audio | [FEATURE ID: 11] multiple pages
[FEATURE ID: 12] query, limited image dictionary | response, document, message, text, trigger, selection, signal | [FEATURE ID: 12] request
[FEATURE ID: 13] area, interest | attribute, element, answer, location, action, event, object | [FEATURE ID: 13] column number
[FEATURE ID: 14] response | location, view, display, window, list, position, document | [FEATURE ID: 14] column, page, page row, row
[FEATURE ID: 15] claim | clause, claim of, the claim, of claim, item, figure, paragraph | [FEATURE ID: 15] claim
1 . An apparatus applicable [FEATURE ID: 1]

for a user to consume [TRANSITIVE ID: 2]

materials [FEATURE ID: 3]

, the apparatus [FEATURE ID: 4]

comprising [TRANSITIVE ID: 5]

: a controller [FEATURE ID: 6]

; a wireless communication component [FEATURE ID: 6]

; and a storage medium [FEATURE ID: 6]

to store [TRANSITIVE ID: 2]

at least a first piece [FEATURE ID: 7]

of materials , with the first piece of materials comprising a text sub file [FEATURE ID: 8]

including [TRANSITIVE ID: 9]

a first piece of text [FEATURE ID: 3]

, an audio sub file including a first piece of audio , and a logic sub file [FEATURE ID: 6]

including a first piece of instructions executable [FEATURE ID: 10]

by at least the controller , with the first piece of materials and a first limited voice recognition dictionary [FEATURE ID: 6]

configured to be wirelessly received by the apparatus via at least the wireless communication component , with the first limited voice recognition dictionary at least tailored for the first piece of audio , wherein at least using [TRANSITIVE ID: 5]

instructions [FEATURE ID: 3]

in the first piece of instructions , the controller is configured to : analyze [TRANSITIVE ID: 2]

, using at least the first limited voice recognition dictionary , data [FEATURE ID: 11]

based [TRANSITIVE ID: 11]

on first voice inputs [FEATURE ID: 11]

received via at least a microphone [FEATURE ID: 6]

that is configured to be coupled at least to the controller , to at least identify a query [FEATURE ID: 12]

; and identify an area [FEATURE ID: 13]

in the first piece of materials other than in the logic sub file , based on at least the query , to generate a response [FEATURE ID: 14]

to present [FEATURE ID: 2]

to the user , with a second piece [FEATURE ID: 7]

of materials including a second piece of audio and a second piece of instructions executable by at least the controller , with the second piece of materials and a second limited voice recognition dictionary [FEATURE ID: 6]

configured to be wirelessly received by the apparatus via at least the wireless communication component , with the second limited voice recognition dictionary and the first limited voice recognition dictionary both in the apparatus , but the second limited voice recognition dictionary being different from the first limited voice recognition dictionary , with the second limited voice recognition dictionary at least tailored for the second piece of audio , and with at least the controller configured to using at least instructions in the second piece of instructions and the second limited voice recognition dictionary to identify , based on voice inputs via at least the microphone , materials in the second piece of materials . 2 . An apparatus as recited in claim [FEATURE ID: 15]

1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable determining at least an interest [FEATURE ID: 13]

of the user in the first piece of materials and identify materials in the first piece of materials at least based on the interest , to present to the user . 3 . An apparatus as recited in claim 1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable searching for materials in the first piece of materials to present to the user . 4 . An apparatus as recited in claim 1 , wherein the area in the first piece of materials includes an image , and wherein to identify the area to generate the response depends on a limited image dictionary [FEATURE ID: 12]

configured to be wirelessly received by the apparatus via at least the wireless communication component . 5 . An apparatus as recited in claim 1 , wherein the apparatus is configured to be implemented as a headset [FEATURE ID: 6]

. 6 . An apparatus as recited in claim 1 , wherein the apparatus is configured to be implemented in a car . 7 . An apparatus as recited in claim 1 , wherein the apparatus comprises a sensor [FEATURE ID: 6]
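The claims above turn on matching voice inputs against a limited voice recognition dictionary tailored for a particular piece of content. A minimal sketch of that idea, with all names and the fuzzy-matching approach chosen here purely illustrative:

```python
from difflib import get_close_matches

# Hypothetical sketch: a "limited dictionary" constrains recognition to
# vocabulary tailored for one piece of materials, so even a noisy acoustic
# guess resolves to a dictionary word. The word list is an assumption.
FIRST_DICTIONARY = ["chapter", "summary", "author", "next", "previous"]

def recognize(tokens, dictionary):
    """Map raw acoustic-front-end guesses onto the limited dictionary."""
    recognized = []
    for token in tokens:
        # Only dictionary words are candidates; everything else is dropped.
        match = get_close_matches(token, dictionary, n=1, cutoff=0.7)
        if match:
            recognized.append(match[0])
    return recognized

# Garbled inputs still resolve, because the candidate set is small.
print(recognize(["chaptr", "summry"], FIRST_DICTIONARY))  # ['chapter', 'summary']
```

A second piece of materials would ship with its own dictionary, matching the claim's requirement that the two dictionaries differ and each be tailored to its own audio.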

1 . In a computing environment [FEATURE ID: 6]

, a method for visually distinguishing between pages [FEATURE ID: 3]

having [TRANSITIVE ID: 5]

different types of content [FEATURE ID: 3]

, the method comprising [TRANSITIVE ID: 5]

: receiving a request [FEATURE ID: 12]

to display [TRANSITIVE ID: 2]

multiple pages [FEATURE ID: 11]

of a document [FEATURE ID: 6]

, the request indicating [TRANSITIVE ID: 9]

that the multiple pages should be displayed based on an identified column number [FEATURE ID: 13]

representing [TRANSITIVE ID: 5]

at least one column [FEATURE ID: 14]

; determining that the document includes pages having a first type [FEATURE ID: 7]

of content ; determining that the document also includes pages having a second type [FEATURE ID: 7]

of content ; and laying out a page grid [FEATURE ID: 8]

for displaying the pages having the first type of content and pages having the second type of content , including determining a page [FEATURE ID: 14]

of the document that has active focus , and creating at least one page row [FEATURE ID: 14]

in the page grid including a page row for the page that has active focus , wherein laying out the page grid further includes separating pages having the first type of content from pages having the second type of content , such [FEATURE ID: 9]

that the pages having the second type of content will always be displayed in a different row of the page grid than pages with the first type of content , such that only pages having the same type [FEATURE ID: 7]

of content are added to a same row such that a number of pages in the row [FEATURE ID: 14]

matches the column number . 2 . The method of claim [FEATURE ID: 15]

1 further comprising , determining that pages of the first content type [FEATURE ID: 7]

of the document are of the same size as each other , and laying out the page grid by selecting each page of the first content type having the same size and adding that page to a row . 3 . The method of claim 1 further comprising , determining that pages of the first content type of the document are not of the same size as each other , and establishing the page row for the page that has active focus as a pivot row , and further processing previous pages in the document to add one or more rows to the page grid that are logically before the pivot row , and processing pages in the document that are after pages in the pivot row to add one or more rows to the page grid that are logically after the pivot row . 4 . The method of claim 3 wherein the column number is greater than one , and wherein establishing the page row for the page that has active focus as the pivot row comprises adding at least one other page to the pivot row to match the column number , the at least one other page necessarily having content of the same type as that of the page that has active focus . 5 . The method of claim 4 wherein processing previous pages [FEATURE ID: 3]

in the document to add one or more rows to the page grid comprises , selecting a page before a previous page [FEATURE ID: 7]

added to a row as a selected page , determining whether adding the selected page to the row would cause the row to exceed a width determined for the pivot row , and if adding that page would exceed the width , creating a new row and adding the selected page to the new row . 6 . The method of claim 1 further comprising , determining whether a mix exists with respect to pages in the document , and determining a horizontal alignment for at least one row based on whether a mix exists . 7 . A computer [FEATURE ID: 4]

- readable storage medium [FEATURE ID: 1]

having computer - executable instructions [FEATURE ID: 10]
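The page-grid method claimed above is algorithmic: rows are filled to a column number, and a row never mixes content types, so pages of the second type always land in a different row. A minimal sketch under those constraints (page representation and names are assumptions):

```python
# Hypothetical sketch of the claimed page-grid layout. Each page is a
# (page_id, content_type) pair; rows hold at most `column_number` pages
# and never mix content types, per the claim's separation requirement.
def lay_out_page_grid(pages, column_number):
    """Group pages into rows of a page grid."""
    grid, row = [], []
    for page in pages:
        # Start a new row when the row is full or the content type changes.
        if row and (len(row) == column_number or row[-1][1] != page[1]):
            grid.append(row)
            row = []
        row.append(page)
    if row:
        grid.append(row)
    return grid

pages = [(1, "portrait"), (2, "portrait"), (3, "landscape"), (4, "portrait")]
print(lay_out_page_grid(pages, 2))
# [[(1, 'portrait'), (2, 'portrait')], [(3, 'landscape')], [(4, 'portrait')]]
```

The pivot-row refinement in claims 3-5 (anchoring layout on the page with active focus and growing rows before and after it) would wrap this same row-filling rule; it is omitted here for brevity.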








Targeted Patent:

Patent: US11416668B2
Filed: 2009-10-14
Issued: 2022-08-16
Patent Holder: (Original Assignee) Iplcontent LLC     (Current Assignee) Iplcontent LLC
Inventor(s): Chi Fai Ho, Peter P. Tong

Title: Method and apparatus applicable for voice recognition with limited dictionary

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US7401286B1
Filed: 1993-12-02
Issued: 2008-07-15
Patent Holder: (Original Assignee) Discovery Communications LLC     (Current Assignee) Adrea LLC
Inventor(s): John S. Hendricks, Michael L. Asmussen

Title: Electronic book electronic links

[FEATURE ID: 1] apparatus applicable, area, image, environmentitem, interface, application, article, arrangement, apparatus, system[FEATURE ID: 1] electronic book system, electronic book, link
[TRANSITIVE ID: 2] consume, store, presentprovide, receive, access, use, carry, comprise, show[TRANSITIVE ID: 2] have
[FEATURE ID: 3] materials, audio, data, first voice inputs, materials other, voice inputs, second voice inputsinformation, material, voice, words, the, sounds, speech[FEATURE ID: 3] actual text
[TRANSITIVE ID: 4] comprising, includinghaving, includes, containing, of, comprises, featuring, composing[TRANSITIVE ID: 4] comprising, including
[FEATURE ID: 5] controller, storage medium, microphone, second limited voice recognition dictionary, headset, car, sensordevice, memory, server, display, terminal, camera, database[FEATURE ID: 5] computer, menu system
[FEATURE ID: 6] wireless communication componentdevice, controller, processor, keypad, network, memory, keyboard[FEATURE ID: 6] readable medium, select button
[FEATURE ID: 7] first pieceset, block, portion, piece[FEATURE ID: 7] component
[FEATURE ID: 8] text sub file, logic sub file, first limited voice recognition dictionary, query, limited image dictionarydocument, text, database, template, message, library, table[FEATURE ID: 8] link type menu, page
[FEATURE ID: 9] texttexts, content, character, words[FEATURE ID: 9] other fonts
[FEATURE ID: 10] audio sub file, interestimage, index, area, orientation, appearance, attachment, overlay[FEATURE ID: 10] underlined typeface
[TRANSITIVE ID: 11] configured, basedprovided, stored, implemented, included, received, carried, disposed[TRANSITIVE ID: 11] embodied
[TRANSITIVE ID: 12] usingprocessing, implementing, use, employing, utilizing[TRANSITIVE ID: 12] using
[FEATURE ID: 13] instructionscommands, information, tools, hints, procedures, parameters[FEATURE ID: 13] instructions
[FEATURE ID: 14] responsemenu, display, link, content, list, portion, page[FEATURE ID: 14] link type, cursor
[FEATURE ID: 15] claimparagraph, of claim, statement, claim of, claimed, clair, preceding claim[FEATURE ID: 15] claim
1 . An apparatus applicable [FEATURE ID: 1]

for a user to consume [TRANSITIVE ID: 2]

materials [FEATURE ID: 3]

, the apparatus comprising [TRANSITIVE ID: 4]

: a controller [FEATURE ID: 5]

; a wireless communication component [FEATURE ID: 6]

; and a storage medium [FEATURE ID: 5]

to store [TRANSITIVE ID: 2]

at least a first piece [FEATURE ID: 7]

of materials , with the first piece of materials comprising a text sub file [FEATURE ID: 8]

including [TRANSITIVE ID: 4]

a first piece of text [FEATURE ID: 9]

, an audio sub file [FEATURE ID: 10]

including a first piece of audio [FEATURE ID: 3]

, and a logic sub file [FEATURE ID: 8]

including a first piece of instructions executable by at least the controller , with the first piece of materials and a first limited voice recognition dictionary [FEATURE ID: 8]

configured [TRANSITIVE ID: 11]

to be wirelessly received by the apparatus via at least the wireless communication component , with the first limited voice recognition dictionary at least tailored for the first piece of audio , wherein at least using [TRANSITIVE ID: 12]

instructions [FEATURE ID: 13]

in the first piece of instructions , the controller is configured to : analyze , using at least the first limited voice recognition dictionary , data [FEATURE ID: 3]

based [TRANSITIVE ID: 11]

on first voice inputs [FEATURE ID: 3]

received via at least a microphone [FEATURE ID: 5]

that is configured to be coupled at least to the controller , to at least identify a query [FEATURE ID: 8]

; and identify an area [FEATURE ID: 1]

in the first piece of materials other [FEATURE ID: 3]

than in the logic sub file , based on at least the query , to generate a response [FEATURE ID: 14]

to present [FEATURE ID: 2]

to the user , with a second piece of materials including a second piece of audio and a second piece of instructions executable by at least the controller , with the second piece of materials and a second limited voice recognition dictionary [FEATURE ID: 5]

configured to be wirelessly received by the apparatus via at least the wireless communication component , with the second limited voice recognition dictionary and the first limited voice recognition dictionary both in the apparatus , but the second limited voice recognition dictionary being different from the first limited voice recognition dictionary , with the second limited voice recognition dictionary at least tailored for the second piece of audio , and with at least the controller configured to using at least instructions in the second piece of instructions and the second limited voice recognition dictionary to identify , based on voice inputs [FEATURE ID: 3]

via at least the microphone , materials in the second piece of materials . 2 . An apparatus as recited in claim [FEATURE ID: 15]

1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable determining at least an interest [FEATURE ID: 10]

of the user in the first piece of materials and identify materials in the first piece of materials at least based on the interest , to present to the user . 3 . An apparatus as recited in claim 1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable searching for materials in the first piece of materials to present to the user . 4 . An apparatus as recited in claim 1 , wherein the area in the first piece of materials includes an image [FEATURE ID: 1]

, and wherein to identify the area to generate the response depends on a limited image dictionary [FEATURE ID: 8]

configured to be wirelessly received by the apparatus via at least the wireless communication component . 5 . An apparatus as recited in claim 1 , wherein the apparatus is configured to be implemented as a headset [FEATURE ID: 5]

. 6 . An apparatus as recited in claim 1 , wherein the apparatus is configured to be implemented in a car [FEATURE ID: 5]

. 7 . An apparatus as recited in claim 1 , wherein the apparatus comprises a sensor [FEATURE ID: 5]

at least for images , wherein the controller is configured to recognize at least an image sensed by the sensor , and wherein the image includes an image of the user or of an environment [FEATURE ID: 1]

around the apparatus . 8 . An apparatus as recited in claim 7 , wherein at least using instructions in the first piece of instructions , the controller is configured to recognize at least the image using at least a limited image dictionary configured to be wirelessly received by the apparatus via at least the wireless communication component , wherein the limited image dictionary is at least tailored for images in the first piece of materials , and wherein at least another image is not able to be recognized by the apparatus using the limited image dictionary , but able to be recognized using another limited image dictionary . 9 . An apparatus as recited in claim 8 , wherein at least using instructions in the first piece of instructions , the controller is configured to : analyze data based on second voice inputs [FEATURE ID: 3]

1 . An electronic book system [FEATURE ID: 1]

tangibly embodied [TRANSITIVE ID: 11]

on a computer [FEATURE ID: 5]

- readable medium [FEATURE ID: 6]

, comprising [TRANSITIVE ID: 4]

: an electronic book [FEATURE ID: 1]

; and a menu system [FEATURE ID: 5]

, the menu system including [TRANSITIVE ID: 4]

: a help menu , wherein the help menu provides instructions [FEATURE ID: 13]

for using [TRANSITIVE ID: 12]

the menu system ; and a show links menu , wherein when selected , the show links menu displays a link type menu [FEATURE ID: 8]

that includes audio clip links , graphics file links , definition links , language translation links , book order links , book review links , related discussion group links , pronunciation links , data base links , other book links , and book selection links . 2 . The system of claim [FEATURE ID: 15]

1 , wherein each component [FEATURE ID: 7]

of actual text [FEATURE ID: 3]

on a page [FEATURE ID: 8]

of the electronic book may have [TRANSITIVE ID: 2]

one or more links to additional components . 3 . The system of claim 2 , wherein a desired link type [FEATURE ID: 14]

is selected by highlighting the desired link type from the link type menu with a cursor [FEATURE ID: 14]

and operating a select button [FEATURE ID: 6]

, and wherein when the desired link type is selected , all links of the selected type that exist on a displayed page of the electronic book are highlighted . 4 . The system of claim 3 , wherein the links of the selected type are highlighted in a color that is different from other colors on the displayed page of the electronic book . 5 . The system of claim 3 , wherein the links of the selected link type are displayed in a font that is different from other fonts [FEATURE ID: 9]

on the displayed page of the electronic book . 6 . The system of claim 3 , wherein the links of the selected link type are highlighted by displaying the links in one of a bold typeface , an italics typeface , and an underlined typeface [FEATURE ID: 10]

. 7 . The system of claim 3 , wherein a desired link [FEATURE ID: 1]
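The "show links" behavior claimed above amounts to filtering the links on the displayed page by the type selected from the link type menu, then highlighting the matches. A minimal sketch (link representation and type names are assumptions drawn from the claim's list):

```python
# Hypothetical sketch of the claimed show-links selection. Types come
# from the link type menu; page links are (anchor_text, link_type) pairs.
LINK_TYPES = {"audio clip", "graphics file", "definition", "pronunciation"}

def links_to_highlight(page_links, selected_type):
    """Collect every link of the selected type on the displayed page."""
    if selected_type not in LINK_TYPES:
        raise ValueError(f"unknown link type: {selected_type}")
    return [text for text, kind in page_links if kind == selected_type]

page = [("whale", "definition"), ("song", "audio clip"), ("Moby", "definition")]
print(links_to_highlight(page, "definition"))  # ['whale', 'Moby']
```

Rendering the matches in a distinct color, font, or typeface (claims 4-6) would be applied to the returned list by the display layer.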








Targeted Patent:

Patent: US11416668B2
Filed: 2009-10-14
Issued: 2022-08-16
Patent Holder: (Original Assignee) Iplcontent LLC     (Current Assignee) Iplcontent LLC
Inventor(s): Chi Fai Ho, Peter P. Tong

Title: Method and apparatus applicable for voice recognition with limited dictionary

 
Cross Reference / Shared Meaning between the Lines
Charted Against:

Patent: US20080168073A1
Filed: 2005-01-19
Issued: 2008-07-10
Patent Holder: (Original Assignee) Amazon Technologies Inc     (Current Assignee) Amazon Technologies Inc
Inventor(s): Hilliard B. Siegel, Thomas A. Ryan, Robert L. Goodwin, John Lattyak

Title: Providing Annotations of a Digital Work

[FEATURE ID: 1] apparatus applicable, area, voice inputs, interest, image, environmentevent, interface, item, application, information, entity, identification[FEATURE ID: 1] eBook reader device, annotation, invariant location reference identifier, other eBook reader device, authorization credential
[FEATURE ID: 2] materials, instructions, data, first voice inputs, second voice inputsinformation, audio, commands, words, and, signals, voice[FEATURE ID: 2] content, readable media
[TRANSITIVE ID: 3] comprising, including, usinghaving, of, includes, with, being, from, implementing[TRANSITIVE ID: 3] comprising
[FEATURE ID: 4] controller, wireless communication component, first limited voice recognition dictionary, microphone, headset, car, sensordevice, server, display, memory, camera, terminal, processor[FEATURE ID: 4] remote data store, eBook reader, location reference identifiers, computer
[FEATURE ID: 5] storage medium, audio, logic sub file, query, response, second limited voice recognition dictionary, limited image dictionarydocument, database, content, text, location, message, request[FEATURE ID: 5] method, digital work
[FEATURE ID: 6] first piecefile, block, part, portion, piece[FEATURE ID: 6] segment
[FEATURE ID: 7] text sub filedocument, metadata, markup, spreadsheet, page, xml, hyperlink[FEATURE ID: 7] data file, graphical format, textual format, audio format
[FEATURE ID: 8] text, materials othermedia, logic, material, data, the, software, content[FEATURE ID: 8] local memory
[FEATURE ID: 9] audio sub fileattachment, image, output, applet, item, overlay, object[FEATURE ID: 9] index file separate
[FEATURE ID: 10] instructions executableusable, software, encoded, operable, translatable, readable, logic[FEATURE ID: 10] executable instructions
[TRANSITIVE ID: 11] received, based, differentobtained, provided, generated, output, read, retrieved, captured[TRANSITIVE ID: 11] implemented, stored, accessible, separate
[FEATURE ID: 12] leastoutput, and, or, input[FEATURE ID: 12] regard
[FEATURE ID: 13] claimany, figure, claim of, preceding claim, the claim, of claim, item[FEATURE ID: 13] claim
1 . An apparatus applicable [FEATURE ID: 1]

for a user to consume materials [FEATURE ID: 2]

, the apparatus comprising [TRANSITIVE ID: 3]

: a controller [FEATURE ID: 4]

; a wireless communication component [FEATURE ID: 4]

; and a storage medium [FEATURE ID: 5]

to store at least a first piece [FEATURE ID: 6]

of materials , with the first piece of materials comprising a text sub file [FEATURE ID: 7]

including [TRANSITIVE ID: 3]

a first piece of text [FEATURE ID: 8]

, an audio sub file [FEATURE ID: 9]

including a first piece of audio [FEATURE ID: 5]

, and a logic sub file [FEATURE ID: 5]

including a first piece of instructions executable [FEATURE ID: 10]

by at least the controller , with the first piece of materials and a first limited voice recognition dictionary [FEATURE ID: 4]

configured to be wirelessly received [TRANSITIVE ID: 11]

by the apparatus via at least the wireless communication component , with the first limited voice recognition dictionary at least tailored for the first piece of audio , wherein at least using [TRANSITIVE ID: 3]

instructions [FEATURE ID: 2]

in the first piece of instructions , the controller is configured to : analyze , using at least the first limited voice recognition dictionary , data [FEATURE ID: 2]

based [TRANSITIVE ID: 11]

on first voice inputs [FEATURE ID: 2]

received via at least a microphone [FEATURE ID: 4]

that is configured to be coupled at least [FEATURE ID: 12]

to the controller , to at least identify a query [FEATURE ID: 5]

; and identify an area [FEATURE ID: 1]

in the first piece of materials other [FEATURE ID: 8]

than in the logic sub file , based on at least the query , to generate a response [FEATURE ID: 5]

to present to the user , with a second piece of materials including a second piece of audio and a second piece of instructions executable by at least the controller , with the second piece of materials and a second limited voice recognition dictionary [FEATURE ID: 5]

configured to be wirelessly received by the apparatus via at least the wireless communication component , with the second limited voice recognition dictionary and the first limited voice recognition dictionary both in the apparatus , but the second limited voice recognition dictionary being different [FEATURE ID: 11]

from the first limited voice recognition dictionary , with the second limited voice recognition dictionary at least tailored for the second piece of audio , and with at least the controller configured to using at least instructions in the second piece of instructions and the second limited voice recognition dictionary to identify , based on voice inputs [FEATURE ID: 1]

via at least the microphone , materials in the second piece of materials . 2 . An apparatus as recited in claim [FEATURE ID: 13]

1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable determining at least an interest [FEATURE ID: 1]

of the user in the first piece of materials and identify materials in the first piece of materials at least based on the interest , to present to the user . 3 . An apparatus as recited in claim 1 , wherein at least using instructions in the first piece of instructions , the controller is configured to at least enable searching for materials in the first piece of materials to present to the user . 4 . An apparatus as recited in claim 1 , wherein the area in the first piece of materials includes an image [FEATURE ID: 1]

, and wherein to identify the area to generate the response depends on a limited image dictionary [FEATURE ID: 5]

configured to be wirelessly received by the apparatus via at least the wireless communication component . 5 . An apparatus as recited in claim 1 , wherein the apparatus is configured to be implemented as a headset [FEATURE ID: 4]

. 6 . An apparatus as recited in claim 1 , wherein the apparatus is configured to be implemented in a car [FEATURE ID: 4]

. 7 . An apparatus as recited in claim 1 , wherein the apparatus comprises a sensor [FEATURE ID: 4]

at least for images , wherein the controller is configured to recognize at least an image sensed by the sensor , and wherein the image includes an image of the user or of an environment [FEATURE ID: 1]

around the apparatus . 8 . An apparatus as recited in claim 7 , wherein at least using instructions in the first piece of instructions , the controller is configured to recognize at least the image using at least a limited image dictionary configured to be wirelessly received by the apparatus via at least the wireless communication component , wherein the limited image dictionary is at least tailored for images in the first piece of materials , and wherein at least another image is not able to be recognized by the apparatus using the limited image dictionary , but able to be recognized using another limited image dictionary . 9 . An apparatus as recited in claim 8 , wherein at least using instructions in the first piece of instructions , the controller is configured to : analyze data based on second voice inputs [FEATURE ID: 2]

1 . A method [FEATURE ID: 5]

of annotating a digital work [FEATURE ID: 5]

implemented [TRANSITIVE ID: 11]

at least partially by an eBook reader device [FEATURE ID: 1]

, the method comprising [TRANSITIVE ID: 3]

: receiving an annotation [FEATURE ID: 1]

relating to a specified portion of the digital work ; appending an invariant location reference identifier [FEATURE ID: 1]

corresponding to the specified portion of the digital work to the annotation ; and storing the annotation in association with the digital work . 2 . The method of claim [FEATURE ID: 13]

1 , wherein the annotation is stored [TRANSITIVE ID: 11]

to a remote data store [FEATURE ID: 4]

. 3 . The method of claim 2 , wherein the annotation stored at the remote data store is accessible [FEATURE ID: 11]

by at least one other eBook reader device [FEATURE ID: 1]

. 4 . The method of claim 2 , wherein the annotation stored at the remote data store requires an authorization credential [FEATURE ID: 1]

for access . 5 . The method of claim 1 , wherein the annotation is stored to local memory [FEATURE ID: 8]

of the eBook reader device . 6 . The method of claim 5 , further comprising transmitting the digital work and the annotation to a remote data store . 7 . The method of claim 5 , further comprising synchronizing content [FEATURE ID: 2]

stored on the eBook reader [FEATURE ID: 4]

with a remote data store . 8 . The method of claim 7 , wherein the synchronization is performed periodically . 9 . The method of claim 7 , wherein the synchronization is performed in response to a change in content stored on the eBook reader device . 10 . The method of claim 1 , further comprising transmitting the annotation to another eBook reader device . 11 . The method of claim 1 , wherein the digital work is partitioned into a plurality of segments , and wherein each segment [FEATURE ID: 6]

of the digital work has an invariant location reference identifier assigned thereto , such that each invariant location reference identifier is uniquely assigned with a corresponding segment of the digital work , regardless of display conditions under which the digital work is displayed . 12 . The method of claim 11 , wherein the invariant location reference identifiers are separate [FEATURE ID: 11]

from the digital work , such that the digital work is unaltered by the location reference identifiers [FEATURE ID: 4]

. 13 . The method of claim 1 , wherein the location reference identifiers are stored in an index file separate [FEATURE ID: 9]

from the digital work . 14 . The method of claim 1 , wherein the location reference identifiers are embedded in a data file [FEATURE ID: 7]

of the digital work . 15 . One or more computer [FEATURE ID: 4]

- readable media [FEATURE ID: 2]

comprising computer - executable instructions [FEATURE ID: 10]

for implementing the method of claim 1 . 16 . A method of presenting an annotation on an eBook reader device , the method comprising : receiving an annotation of a digital work ; storing the annotation in association with the digital work ; receiving an authorization credential granting access to the annotation ; and if the authorization credential is valid , presenting the annotation of the digital work on the eBook reader device in context with regard [FEATURE ID: 12]

to the digital work . 17 . The method of claim 16 , wherein the annotation of the digital work is received in one or more of the following formats : graphical format [FEATURE ID: 7]

, textual format [FEATURE ID: 7]

, audio format [FEATURE ID: 7]
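The annotation claims above hinge on invariant location reference identifiers: the work is partitioned into segments, each segment gets a fixed identifier kept separate from the work (claim 13's index file), and annotations key off those identifiers so they survive repagination. A minimal sketch with hypothetical naming:

```python
# Hypothetical sketch of the invariant-location annotation scheme.
# Identifier format and helper names are assumptions for illustration.
def build_location_index(segments):
    """Assign an invariant identifier to each segment of the work,
    held apart from the work itself so the work stays unaltered."""
    return {f"seg-{i}": text for i, text in enumerate(segments)}

def annotate(annotations, location_id, note):
    """Store an annotation keyed by an invariant identifier, so it stays
    attached regardless of display conditions."""
    annotations.setdefault(location_id, []).append(note)
    return annotations

index = build_location_index(["Call me Ishmael.", "Some years ago..."])
notes = annotate({}, "seg-0", "Famous opening line.")
print(notes)  # {'seg-0': ['Famous opening line.']}
```

Synchronizing `notes` with a remote data store, gated by an authorization credential (claims 2-4 and 16), would operate on the same identifier-keyed structure.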