De Wilde, Philippe (1998) A neural network model of a communication network with information servers. Neural Computing and Applications, 7 (1). pp. 26-36. ISSN 0941-0643. E-ISSN 1433-3058. (doi:10.1007/BF01413707) (KAR id:58059)

The full text of this publication is not currently available from this repository. You may be able to access a copy if URLs are provided.

Official URL: https://doi.org/10.1007/BF01413707
Abstract
This paper models information flow in a communication network. The network consists of nodes that communicate with each other, and information servers that have predominantly one-way communication with their customers. A neural network is used as a model for the communication network. The existence of multiple equilibria in the communication network is established. The network operator observes only one equilibrium, but if he knows the other equilibria, he can influence the free parameters, for example by providing extra bandwidth, so that the network settles in another equilibrium that is more profitable for the operator. The influence of several network parameters on the dynamics is studied both by simulation and by theoretical methods.
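The key ideas in the abstract (a neural-network model of traffic, multiple stable equilibria, and an operator-controlled parameter that selects between them) can be illustrated with a small numerical sketch. The sketch below is an assumption, not the paper's actual equations: it uses a generic two-node Hopfield-style dynamics dx/dt = -x + W·tanh(x) + b, where the weights W, the bias b (standing in for an operator-controlled resource such as extra bandwidth), and all numbers are illustrative.

```python
# A minimal sketch, assuming a Hopfield-style model (not the paper's actual one):
# two mutually inhibiting nodes with dynamics dx/dt = -x + W @ tanh(x) + b,
# integrated by forward Euler. All parameter values are illustrative.
import numpy as np

def settle(W, b, x0, dt=0.05, steps=4000):
    """Integrate dx/dt = -x + W @ tanh(x) + b until (near) equilibrium."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * (-x + W @ np.tanh(x) + b)
    return x

# Mutual inhibition between two nodes gives two stable equilibria.
W = np.array([[0.0, -2.0],
              [-2.0, 0.0]])
b = np.zeros(2)

print(settle(W, b, [ 0.5, -0.5]))   # settles near ( 1.9, -1.9)
print(settle(W, b, [-0.5,  0.5]))   # same parameters, other equilibrium (-1.9,  1.9)

# Biasing node 0 (a stand-in for giving it extra bandwidth) removes the
# second equilibrium: the same initial state now settles with node 0 active.
b_extra = np.array([3.0, 0.0])
print(settle(W, b_extra, [-0.5, 0.5]))  # settles near ( 4.9, -2.0)
```

With the unbiased parameters the equilibrium reached depends only on the initial state, which is the situation the abstract describes: the operator observes one equilibrium without knowing the others. Raising the bias on one node changes the set of equilibria itself, so the network settles in the preferred state regardless of where it started.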
| Item Type: | Article |
| --- | --- |
| DOI/Identification number: | 10.1007/BF01413707 |
| Uncontrolled keywords: | communication networks, neural networks |
| Subjects: | Q Science > QA Mathematics (inc Computing science) > QA 76 Software, computer programming > QA76.87 Neural computers, neural networks |
| Divisions: | Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Computing |
| Depositing User: | Philippe De Wilde |
| Date Deposited: | 03 Jan 2023 16:48 UTC |
| Last Modified: | 04 Jan 2023 11:00 UTC |
| Resource URI: | https://kar.kent.ac.uk/id/eprint/58059 |