Osaka University AI・Data Seminar, 101st Meeting (December 5)

Title: The Structure of Connections and Functional Representation Capabilities in Deep Neural Network Models
Speaker: Junpei Nagase (Specially Appointed Assistant Professor, Data Education Center and Industry-Academia-Government Collaboration Center, The University of Electro-Communications)
Abstract: This presentation discusses the connection structure of deep neural network models. Specifically, we show that when the activation function is the ReLU function, any connection structure that is linear can be reduced to a serial connection, i.e., a multilayer perceptron. Furthermore, a multilayer perceptron with ReLU activations can be binarized while preserving its expressive power and output. Since these results are constructive, the specific parameters can be written down explicitly; we discuss the compressibility of actual models and present concrete transformation algorithms. These results also extend to general piecewise-linear activation functions beyond ReLU. Note, however, that these conclusions apply only to trained (or pre-trained) models and do not address the performance or dynamics of models during training.
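The abstract states that the reduction of linear connection structures to serial connections is constructive. As an illustrative sketch only (not the speaker's actual algorithm), the following NumPy snippet shows one well-known instance of this idea: a residual (skip) connection around a ReLU layer rewritten as a single plain MLP layer, using the identity x = ReLU(x) - ReLU(-x). All names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
W = rng.standard_normal((d, d))
b = rng.standard_normal(d)
x = rng.standard_normal(d)

relu = lambda v: np.maximum(v, 0.0)

# A residual block with a linear skip connection.
y_res = x + relu(W @ x + b)

# Equivalent serial (MLP) layer: the skip is absorbed into the
# hidden layer via the identity x = ReLU(x) - ReLU(-x).
I = np.eye(d)
W1 = np.vstack([W, I, -I])                 # hidden weights, shape (3d, d)
b1 = np.concatenate([b, np.zeros(2 * d)])  # hidden biases
V = np.hstack([I, I, -I])                  # output weights, shape (d, 3d)
y_mlp = V @ relu(W1 @ x + b1)

# Both forms compute the same output for every input x.
assert np.allclose(y_res, y_mlp)
```

The serial form here is wider (3d hidden units) than the residual block it replaces, which is the kind of trade-off the talk's discussion of compressibility presumably quantifies.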

Date

Friday, December 5, 2025, 18:00–20:00

Venue

Held online

Organizer

Co-organizers: HRAM (The Japan Society for Industrial and Applied Mathematics) and the D-DRIVE National Network

Participation Fee

Free (advance registration required)

https://www-mmds.sigmath.es.osaka-u.ac.jp/structure/activity/ai_data.php?id=102

Contact

Takashi Suzuki
suzuki@sigmath.es.osaka-u.ac.jp