
Hybrid Approaches to Machine Translation

Label
Hybrid Approaches to Machine Translation
Title
Hybrid Approaches to Machine Translation
Language
eng
Cataloging source
MiAaPQ
Literary form
non fiction
Nature of contents
dictionaries
Series statement
Theory and Applications of Natural Language Processing
Link
http://libproxy.rpi.edu/login?url=https://ebookcentral.proquest.com/lib/rpi/detail.action?docID=4591910
Carrier category
online resource
Carrier category code
cr
Carrier MARC source
rdacarrier
Color
multicolored
Content category
text
Content type code
txt
Content type MARC source
rdacontent
Contents
  • Foreword -- Preface -- Contents -- Hybrid Machine Translation Overview -- 1 Introduction -- 2 Machine Translation Paradigms -- 2.1 Rule-Based Machine Translation -- 2.1.1 Transfer Systems -- 2.2 Statistical Machine Translation -- 2.2.1 Phrase-Based Systems -- 2.2.2 Syntax-Based Systems -- 2.2.3 Hierarchical Phrase-Based Systems -- 3 System Combination -- 3.1 Sentence-Level Combination -- 3.2 Subsentential-Level Combination -- 3.3 Search Graph-Level Combination -- 4 Hybridisation Led by an SMT System -- 4.1 Pre/Post-processing Integration -- 4.2 Core Integration -- 5 Hybridisation Led by an RBMT System -- 5.1 Enriching Rule-Based Resources -- 5.2 Including Data-Based Modules -- 5.3 Using Rule-Based Translations as a Backbone -- 6 Overview of the Book and Future Research Directions -- 6.1 Overview of the Book -- 6.1.1 Part I: Adding Linguistic Knowledge in SMT -- 6.1.2 Part II: Using Machine Learning in MT -- 6.1.3 Part III: Hybrid NLP Tools Useful for MT -- 6.2 Future Research Directions -- References -- Part I Adding Linguistics into SMT -- Controlled Ascent: Imbuing Statistical MT with Linguistic Knowledge -- 1 Introduction -- 2 Logical Form Translation -- 2.1 Details of the LF-Based System -- 2.2 Results and Lessons Learned -- 3 The Next Generation MSR MT Systems -- 3.1 Like and DontLike -- 3.2 Linguistic Component Accuracy -- 4 Evaluation -- 4.1 Fact or Fiction: BLEU is Biased Against Rule-Based or Linguistically-Informed Systems? -- 4.2 Treelet Penalty Experiments -- 4.3 Interaction Between Decoder Type and Sentence Length -- 4.4 Treelet Penalty and Noisy Data -- 5 The Data Gap -- 5.1 No Parallel Data, No Problem!! -- 6 Conclusions and Future Directions -- References -- Hybrid Word Alignment -- 1 Introduction -- 2 Related Works -- 3 Hybrid Word Alignment Model -- 3.1 Word Alignment Using GIZA++ -- 3.2 Word Alignment Using Berkeley Aligner
  • 3.3 Rule Based Word Alignment -- 3.3.1 Automatic Alignments of NEs Through Transliteration -- 3.3.2 Automatic Chunk Alignment -- 3.4 Hybrid Word Alignments Model -- 3.4.1 Union -- 3.4.2 ADD Additional Alignments -- 3.5 Berkeley Semi-Supervised Alignment -- 4 Tools and Resources Used -- 5 Experiments and Results -- 6 Conclusions and Future Work -- References -- Syntax-Based Pre-reordering for Chinese-to-Japanese Statistical Machine Translation -- 1 Introduction -- 2 Background -- 2.1 Chinese Parsing -- 2.2 Related Research -- 3 Head Finalization for Chinese (HFC) -- 4 Unlabeled Dependency Parsing Based Pre-reordering for Chinese (DPC) -- 5 Evaluation -- 5.1 Experimental Conditions -- 5.2 Results -- 5.3 Effects of Parse Errors -- 6 Discussion and Future Research -- 7 Conclusion -- Appendix: Summary of Part-of-Speech Tag Set in Penn Chinese Treebank -- References -- Part II Using Machine Learning in MT -- Machine Learning Applied to Rule-Based Machine Translation -- 1 Introduction -- 2 SQUOIA Spanish to Quechua MT System -- 3 Subordinated Quechua Verb Forms -- 3.1 Switch-Reference -- 3.2 Other Types of Subordination -- 4 Verb Form Disambiguation with Machine Learning -- 4.1 Training Data -- 4.2 Features -- 4.3 Classification -- 4.4 RBMT System with SVM Verb Disambiguation -- 4.5 Evaluation -- 4.5.1 Whole Verb Disambiguation Pipeline -- 4.5.2 Additional Verb Disambiguation Module -- 5 Relative Clauses -- 5.1 Quechua Relativization -- 5.2 Relative Clause Disambiguation with Machine Learning -- 5.3 Training Data -- 5.4 Features -- 5.5 Evaluation -- 6 Conclusions -- References -- Language-Independent Hybrid MT: Comparative Evaluation of Translation Quality -- 1 Introduction -- 2 Description of the PRESEMT Methodology -- 3 Processing the Parallel Corpus -- 3.1 Aligning the SL and TL Tokens -- 3.2 Phrasing the Input Text -- 4 Main Translation Engine
  • 5 Structure Selection Module (SSM) -- 5.1 Calculating Similarity Using a Dynamic Programming Algorithm -- 5.2 Structural Similarity Example -- 6 Translation Equivalent Selection Module (TES) -- 6.1 Description of the Phrase Model -- 6.2 Applying the Phrase Model to the Tasks of the Translation Equivalent Selection Module -- 6.3 Example of Translation Equivalent Selection -- 7 Evaluation of the PRESEMT MT System -- 7.1 Dataset -- 7.2 Evaluation Results -- 7.3 Comparison to Other MT Systems -- 8 Future Extensions and Potential Improvements on PRESEMT -- References -- Part III Hybrid NLP Tools Useful for MT -- Creating Hybrid Dependency Parsers for Syntax-Based MT -- 1 Introduction -- 2 Background and Related Work -- 2.1 Dependency Parsing -- 2.1.1 Parsers -- 2.2 Hybrid Dependency Parsing -- 2.2.1 Parsing Data Sets -- 2.2.2 Parsing Metrics -- 2.3 Deep Transfer Syntax-Based MT -- 2.3.1 MT Data Sets -- 2.3.2 MT Metrics -- 3 Hybrid Dependency Parsers -- 3.1 Minimum Spanning Tree Combination -- 3.1.1 Parsers -- 3.1.2 Weighting Schemes for Parsing Combination -- 3.1.3 Results -- 3.1.4 Dependency Errors Per POS Tag -- 3.2 Fuzzy Clustering -- 3.2.1 Weighting Schemes for Clustering -- 3.2.2 Determining Part-of-Speech Clustering Weights -- 3.2.3 Fuzzy Clustering Results -- 3.2.4 POS Error Reduction -- 3.3 Model Classification -- 3.3.1 Process Flow -- 3.3.2 Parsers -- 3.3.3 Ensemble SVM System -- 3.3.4 Evaluation -- 3.3.5 Results and Discussion -- 4 MT with a Hybrid Parsing Approach -- 4.1 Data Sets -- 4.1.1 Evaluation Set -- 4.1.2 Training Set -- 4.2 Translation Components -- 4.3 Evaluation -- 4.4 Results and Discussion -- 4.4.1 Type of Changes in WMT Annotation -- 4.4.2 Parsers vs Our Gold Standard -- 4.5 MT Results in WMT Using Hybrid Parsing Approaches -- 4.5.1 Human Manual Evaluation: SVM vs the Baseline System -- 4.5.2 MT Results with Gold Data
  • 5 Conclusion -- References -- Using WordNet-Based Word Sense Disambiguation to Improve MT Performance -- 1 Introduction -- 2 Word Sense Disambiguation and Machine Translation -- 3 Experimental Setup -- 3.1 Corpus and MT Systems -- 3.2 Disambiguation with UKB and WordNet -- 3.3 Disambiguation with Sense Clusters -- 4 Evaluation -- 4.1 Manual Evaluation of WSD Precision in the Context of MT -- 4.2 Agreement Between Each of the MT Systems and the Disambiguated Equivalent -- 4.3 Evaluation with Metrics -- 5 Discussion -- 6 Conclusion -- References
Cover art
https://contentcafe2.btol.com/ContentCafe/Jacket.aspx?Return=1&Type=S&Value=9783319213118&userID=ebsco-test&password=ebsco-test
Dimensions
unknown
Discovery link
http://opac.lib.rpi.edu/record=b4385672
Extent
1 online resource (208 pages)
Form of item
online
Isbn
9783319213118
Media category
computer
Media MARC source
rdamedia
Media type code
c
Sound
unknown sound
Specific material designation
remote

Library Locations

    • Folsom Library
      110 8th St, Troy, NY, 12180, US
      42.729766 -73.682577