Liza, Farhana Ferdousi, Grzes, Marek (2019) Relating RNN layers with the spectral WFA ranks in sequence modelling. In: ACL workshop on Deep Learning and Formal Languages: Building Bridges, 2 August 2019, Florence, Italy. (Unpublished) (KAR id:74240)
Abstract
We analyse Recurrent Neural Networks (RNNs) to understand the significance of multiple LSTM layers. We argue that Weighted Finite-state Automata (WFA) trained with a spectral learning algorithm are helpful for analysing RNNs. Our results suggest that multiple LSTM layers help RNNs learn distributed hidden states, but have a smaller impact on the ability to learn long-term dependencies. The analysis is based on empirical results; however, relevant theory is discussed, wherever possible, to justify and support our conclusions.
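The spectral learning algorithm mentioned in the abstract recovers a WFA from low-rank factorisation of a Hankel matrix. The sketch below is a minimal illustration (not the paper's code): it uses the standard SVD-based formulas on a toy rank-1 function f(w) = 0.5^|w| over a one-letter alphabet, with hypothetical prefix/suffix bases chosen for the example.

```python
import numpy as np

# Toy target over a one-letter alphabet: f(w) = 0.5 ** len(w).
# It is realised by a 1-state WFA, so its Hankel matrix has rank 1.
prefixes = ["", "a", "aa"]
suffixes = ["", "a", "aa"]
f = lambda w: 0.5 ** len(w)

H   = np.array([[f(u + v)       for v in suffixes] for u in prefixes])
H_a = np.array([[f(u + "a" + v) for v in suffixes] for u in prefixes])
h_P = np.array([f(u) for u in prefixes])   # prefix completions f(u)
h_S = np.array([f(v) for v in suffixes])   # suffix completions f(v)

# Truncated SVD of the Hankel matrix at the target rank n.
n = 1
U, s, Vt = np.linalg.svd(H)
V = Vt[:n].T                               # |S| x n

# Standard spectral-WFA parameter formulas.
F = H @ V                                  # |P| x n
A_a    = np.linalg.pinv(F) @ H_a @ V       # transition operator for 'a'
alpha0 = h_S @ V                           # initial weight vector
alphaI = np.linalg.pinv(F) @ h_P           # final weight vector

def wfa_eval(w):
    """Evaluate the learned WFA: f(w) = alpha0 . A_w . alphaI."""
    x = alpha0.copy()
    for _ in w:                            # every symbol here is 'a'
        x = x @ A_a
    return float(x @ alphaI)

print(round(wfa_eval("aaa"), 6))  # recovers f("aaa") = 0.125
```

The rank n at which the truncated SVD is taken is exactly the "spectral WFA rank" the title relates to RNN layers: a larger rank means the Hankel matrix, and hence the target function, needs more WFA states to be represented.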
| Item Type: | Conference or workshop item (Speech) |
|---|---|
| Uncontrolled keywords: | deep learning, NLP, LSTM, WFA, spectral learning |
| Subjects: | Q Science > QA Mathematics (inc Computing science) > QA 76 Software, computer programming > QA76.76 Computer software |
| Institutional Unit: | Schools > School of Computing |
| Former Institutional Unit: | Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Computing |
| Depositing User: | Marek Grzes |
| Date Deposited: | 04 Jun 2019 11:14 UTC |
| Last Modified: | 20 May 2025 10:23 UTC |
| Resource URI: | https://kar.kent.ac.uk/id/eprint/74240 (The current URI for this page, for reference purposes) |

ORCID: https://orcid.org/0000-0003-4901-1539