 
 
 
 
IEEE CIS Research Frontier, Issue 113, June 2022
 
 
 
 
Announcements
 
 
 
 

SSCI 2022 Call for Papers


The IEEE Symposium Series on Computational Intelligence (SSCI 2022) is an established flagship annual international series of symposia on computational intelligence, sponsored by the IEEE Computational Intelligence Society to promote and stimulate discussion of the latest theory, algorithms, applications and emerging topics in computational intelligence. By co-locating multiple symposia under one roof, each dedicated to a specific topic in the CI domain, IEEE SSCI aims to encourage cross-fertilization of ideas and provide a unique platform for top researchers, professionals, and students from all around the world to discuss and present their findings. IEEE SSCI 2022 will feature keynote addresses, tutorials, panel discussions and special sessions, all of which are open to all participants. The conference proceedings of IEEE SSCI will be included in IEEE Xplore and indexed by all major databases.

Please refer to the list of symposia here to find the most relevant forum for your paper.

Important Dates

  • Paper Submission: Friday, 1 July 2022
  • Paper Acceptance: Thursday, 1 September 2022
  • Full Manuscript Submission: Monday, 19 September 2022
  • Early Registration: Monday, 26 September 2022
  • Conference Dates: 4 - 7 December 2022

For more information on the SSCI 2022 Call for Papers, please click here.

 
 
 
 

Call for Associate Editors for the IEEE TNNLS

IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS) publishes technical articles that deal with the theory, design, and applications of neural networks and related learning systems. There are open Associate Editor positions on the Editorial Board of IEEE TNNLS. For more information and instructions on how to apply, please visit our website.

Deadline for applying: 30 June 2022

Notifications: 30 July 2022  

Please note that all applications will be carefully reviewed considering several elements, including prior editorial experience, topics of expertise, and publication record. Note, however, that all pre-selected candidates are subject to the approval of the Vice-President for Publications and the President of the IEEE Computational Intelligence Society. Female candidates and people with affiliations in industry and/or government are strongly encouraged to apply.

 
 
 
 
Research Frontier
 
 
 
 

Hands-On Bayesian Neural Networks--A Tutorial for Deep Learning Users

Modern deep learning methods constitute incredibly powerful tools to tackle a myriad of challenging problems. However, since deep learning methods operate as black boxes, the uncertainty associated with their predictions is often challenging to quantify. Bayesian statistics offer a formalism to understand and quantify the uncertainty associated with deep neural network predictions. This tutorial provides deep learning practitioners with an overview of the relevant literature and a complete toolset to design, implement, train, use and evaluate Bayesian neural networks, i.e., stochastic artificial neural networks trained using Bayesian methods. Read More
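
As a rough illustration of the core idea only (not the tutorial's toolset), the NumPy sketch below treats the weight and bias of a toy linear model as random variables with an assumed Gaussian approximate posterior and uses Monte Carlo sampling over the weights to obtain both a predictive mean and an uncertainty estimate; the posterior parameters here are illustrative assumptions, not learned values.

import numpy as np

rng = np.random.default_rng(0)

# Test inputs at which to predict.
x = np.linspace(-1.0, 1.0, 20)

# Assumed (illustrative) Gaussian approximate posterior over the weight and
# bias of a linear model, standing in for a trained Bayesian layer.
w_mean, w_std = 2.0, 0.3
b_mean, b_std = 0.0, 0.1

# Monte Carlo prediction: sample the stochastic weights many times and
# aggregate the resulting predictions.
n_samples = 1000
w = rng.normal(w_mean, w_std, size=n_samples)
b = rng.normal(b_mean, b_std, size=n_samples)
preds = w[:, None] * x[None, :] + b[:, None]   # shape: (n_samples, n_points)

pred_mean = preds.mean(axis=0)   # point prediction
pred_std = preds.std(axis=0)     # uncertainty estimate

print("prediction at x=1.0: %.2f +/- %.2f" % (pred_mean[-1], pred_std[-1]))

The spread of the sampled predictions is exactly what a stochastic (Bayesian) network exposes and a deterministic black-box network does not.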

IEEE Computational Intelligence Magazine, May 2022

 
 
 
 

Conceptual Game Expansion


Automated game design is the problem of automatically producing games through computational processes. Traditionally, these methods have relied on the authoring of search spaces by a designer, defining the space of all possible games for the system to author. In this article, we instead learn representations of existing games from gameplay video and use these to approximate a search space of novel games. In a human subject study, we demonstrate that these novel games are indistinguishable from human games in terms of challenge and that one of the novel games was equivalent to one of the human games in terms of fun, frustration, and likeability. Read More


IEEE Transactions on Games, March 2022

 
 
 
 
 
 
 
 

Multi-Task Particle Swarm Optimization With Dynamic Neighbor and Level-Based Inter-Task Learning


Existing multifactorial particle swarm optimization algorithms treat all particles equally, with a consistent inter-task exemplar selection and generation strategy. This may lead to poor performance when the algorithm searches partial optimal areas belonging to different tasks at a later stage. In pedagogy, teachers teach students at different levels differently, according to their cognitive and learning abilities. Inspired by this idea, in this work we devise a novel level-based inter-task learning strategy built upon a dynamic local topology of inter-task particles. The proposed method separates particles into several levels and assigns distinct inter-task learning methods to the particles at different levels. Specifically, we propose a level-based inter-task learning strategy to transfer shared information within the cross-task neighborhood. By assigning diverse search preferences to the particles, the algorithm is able to explore the search space using cross-task knowledge while retaining the ability to refine the search area. Read More
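
The sketch below is a heavily simplified NumPy illustration of the general idea, not the paper's algorithm: particles are ranked into levels by fitness, the top level refines its own task, and lower levels blend a cross-task exemplar into their velocity update. The two sphere tasks, the number of levels, and the mixing weights are all assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(1)

# Two toy tasks (shifted sphere functions), one sub-swarm per task.
def f1(x): return np.sum((x - 0.5) ** 2, axis=-1)
def f2(x): return np.sum((x + 0.5) ** 2, axis=-1)

tasks = [f1, f2]
dim, n = 10, 20
pos = [rng.uniform(-1, 1, (n, dim)) for _ in range(2)]
vel = [np.zeros((n, dim)) for _ in range(2)]

for it in range(100):
    for t in range(2):
        order = np.argsort(tasks[t](pos[t]))      # rank particles by fitness
        levels = np.array_split(order, 3)         # split the ranking into 3 levels
        gbest = pos[t][order[0]]                  # best particle of the own task
        other = 1 - t
        cross_best = pos[other][np.argmin(tasks[other](pos[other]))]
        for lvl, idx in enumerate(levels):
            # Top level refines its own task; lower levels blend in a
            # cross-task exemplar (a crude stand-in for the paper's dynamic
            # neighbor and level-based inter-task learning).
            alpha = lvl / 2.0                      # cross-task weight: 0, 0.5, 1
            exemplar = (1 - alpha) * gbest + alpha * cross_best
            r = rng.random((len(idx), dim))
            vel[t][idx] = 0.6 * vel[t][idx] + 1.5 * r * (exemplar - pos[t][idx])
            pos[t][idx] = pos[t][idx] + vel[t][idx]

print("best fitness per task:", tasks[0](pos[0]).min(), tasks[1](pos[1]).min())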


IEEE Transactions on Emerging Topics in Computational Intelligence, April 2022

 
 
 
 

Behavior Decision of Mobile Robot With a Neurophysiologically Motivated Reinforcement Learning Model


Online model-free reinforcement learning (RL) approaches play a crucial role in coping with real-world applications, such as behavioral decision making in robotics. How to balance the exploration and exploitation processes is a central problem in RL. A balanced ratio of exploration to exploitation has a great influence on the total learning time and the quality of the learned strategy. Therefore, various action selection policies have been presented to obtain a balance between the exploration and exploitation procedures. However, these approaches are rarely regulated automatically and dynamically in response to environment variations. One of the most remarkable self-adaptation mechanisms in animals is their capacity to dynamically switch between exploration and exploitation strategies. This article proposes a novel neurophysiologically motivated model which simulates the role of the medial prefrontal cortex (MPFC) and lateral prefrontal cortex (LPFC) in behavior decision. Read More
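
As a generic illustration of dynamically regulated exploration (not the paper's MPFC/LPFC model), the NumPy sketch below lets a bandit agent's exploration rate grow with its recent prediction error, so it automatically re-explores when the environment changes; the reward schedule, learning rate, and exploration bounds are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(2)

# Toy two-armed bandit whose reward probabilities swap halfway through,
# mimicking an environment variation the agent must adapt to.
def reward(arm, t):
    p = [0.8, 0.2] if t < 500 else [0.2, 0.8]
    return float(rng.random() < p[arm])

q = np.zeros(2)            # action-value estimates
recent_err = 1.0           # running average of the absolute prediction error
eps_min, eps_max = 0.05, 0.5

for t in range(1000):
    # Exploration rate grows with prediction error: the agent re-explores
    # automatically when its current strategy stops matching the environment
    # (a generic stand-in for the paper's prefrontal switching mechanism).
    eps = eps_min + (eps_max - eps_min) * min(recent_err, 1.0)
    arm = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(q))
    r = reward(arm, t)
    err = r - q[arm]
    q[arm] += 0.1 * err
    recent_err = 0.95 * recent_err + 0.05 * abs(err)

print("final action-value estimates:", q.round(2))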


IEEE Transactions on Cognitive and Developmental Systems, March 2022

 
 
 
 

Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning


Deep reinforcement learning (DRL) has numerous real-life applications, ranging from autonomous driving to healthcare, and has demonstrated superhuman performance in playing complex games like Go. However, in recent years, many researchers have identified various vulnerabilities of DRL. Keeping this critical aspect in mind, in this article we present a comprehensive survey of different attacks on DRL and various countermeasures that can be used for robustifying DRL. To the best of our knowledge, this survey is the first attempt at classifying the attacks based on the different components of the DRL pipeline. This article will provide a roadmap for researchers and practitioners to develop robust DRL systems. Read More
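
As one illustrative example of the kind of attack covered in this literature (not the survey's taxonomy), the sketch below applies an FGSM-style perturbation to the observation fed to a toy linear policy, staying within an L-infinity budget; the policy, dimensions, and budget are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(3)

# A toy linear policy: action scores are W @ observation.
W = rng.standard_normal((3, 8))        # 3 actions, 8-dimensional observation
obs = rng.standard_normal(8)

def action(o):
    return int(np.argmax(W @ o))

# FGSM-style observation perturbation: move the observation in the direction
# that shrinks the score margin between the chosen action and the runner-up,
# within an L-infinity budget epsilon.
a = action(obs)
scores = W @ obs
runner_up = int(np.argsort(scores)[-2])
grad = W[a] - W[runner_up]             # gradient of the score margin w.r.t. obs
epsilon = 0.3
adv_obs = obs - epsilon * np.sign(grad)

print("clean action:", a, "-> adversarial action:", action(adv_obs))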


IEEE Transactions on Artificial Intelligence, April 2022

 
 
 
 

Real-Time Federated Evolutionary Neural Architecture Search


Federated learning is a distributed machine learning approach to privacy preservation; however, two major technical challenges prevent its wider application. One is that federated learning places high demands on communication resources, since a large number of model parameters must be transmitted between the server and the clients. The other challenge is that training large machine learning models such as deep neural networks in federated learning requires a large amount of computational resources, which may be unrealistic for edge devices such as mobile phones. The problem becomes worse when deep neural architecture search (NAS) is to be carried out in federated learning. To address these challenges, we propose an evolutionary approach to real-time federated NAS that not only optimizes the model performance but also reduces the local payload. Read More
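
The payload issue can be illustrated with a simplified NumPy sketch (not the proposed algorithm): each client trains and transmits only a randomly sampled subset of the global parameters, and the server averages each parameter over the clients that sent it. The masking rate and the stand-in for local training are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(4)

# Global model parameters held by the server (a flat vector for simplicity).
global_w = rng.standard_normal(1000)
n_clients = 5

sums = np.zeros_like(global_w)
counts = np.zeros_like(global_w)

for c in range(n_clients):
    # Each client samples a sub-network (here: a random parameter mask) and
    # trains and transmits only those parameters, which shrinks the payload.
    mask = rng.random(global_w.shape) < 0.3
    local_w = global_w.copy()
    # Stand-in for local training: a small perturbation of the sampled
    # parameters (a real client would run SGD on its own private data).
    local_w[mask] += 0.01 * rng.standard_normal(int(mask.sum()))
    sums[mask] += local_w[mask]
    counts[mask] += 1
    print("client", c, "payload:", int(mask.sum()), "of", global_w.size, "parameters")

# Server aggregation: average each parameter over the clients that sent it.
received = counts > 0
global_w[received] = sums[received] / counts[received]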


IEEE Transactions on Evolutionary Computation, April 2022

 
 
 
 

Hierarchical Representation Learning in Graph Neural Networks With Node Decimation Pooling


In graph neural networks (GNNs), pooling operators compute local summaries of input graphs to capture their global properties, and they are fundamental for building deep GNNs that learn hierarchical representations. In this work, we propose the Node Decimation Pooling (NDP), a pooling operator for GNNs that generates coarser graphs while preserving the overall graph topology. During training, the GNN learns new node representations and fits them to a pyramid of coarsened graphs, which is computed offline in a preprocessing stage. NDP consists of three steps. First, a node decimation procedure selects the nodes belonging to one side of the partition identified by a spectral algorithm that approximates the MAXCUT solution. Afterward, the selected nodes are connected with Kron reduction to form the coarsened graph. Finally, since the resulting graph is very dense, we apply a sparsification procedure that prunes the adjacency matrix of the coarsened graph to reduce the computational cost in the GNN. Read More
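
The three steps can be sketched in NumPy on a toy graph as follows; the random graph, the choice of keeping the nonnegative side of the spectral partition, and the simple magnitude threshold used for sparsification are assumptions for illustration, not the paper's exact procedure.

import numpy as np

rng = np.random.default_rng(5)

# Build a small random undirected graph (a path is added to keep it connected).
n = 12
A = np.triu((rng.random((n, n)) < 0.3).astype(float), 1)
A = A + A.T
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

L = np.diag(A.sum(axis=1)) - A                  # graph Laplacian

# Step 1: node decimation via a spectral MAXCUT approximation -- partition the
# nodes by the sign of the eigenvector of the largest Laplacian eigenvalue and
# keep one side of the cut.
eigvals, eigvecs = np.linalg.eigh(L)
v_max = eigvecs[:, -1]
keep = np.where(v_max >= 0)[0]
drop = np.where(v_max < 0)[0]

# Step 2: Kron reduction of the Laplacian onto the kept nodes.
L_kk = L[np.ix_(keep, keep)]
L_kd = L[np.ix_(keep, drop)]
L_dd = L[np.ix_(drop, drop)]
L_red = L_kk - L_kd @ np.linalg.solve(L_dd, L_kd.T)

# Recover the coarsened (dense, weighted) adjacency matrix from the Laplacian.
A_coarse = np.diag(np.diag(L_red)) - L_red

# Step 3: sparsification -- prune weak edges to keep downstream GNN layers cheap.
threshold = 0.1
A_sparse = np.where(A_coarse >= threshold, A_coarse, 0.0)

print("nodes:", n, "->", len(keep))
print("edges before/after pruning:",
      int((A_coarse > 1e-10).sum() // 2), int((A_sparse > 0).sum() // 2))

A convenient property of Kron reduction is that the reduced matrix is again a graph Laplacian, so the coarsened result remains a valid weighted graph for the next pooling level.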


IEEE Transactions on Neural Networks and Learning Systems, May 2022

 
 
 
 
Editor Bing Xue
Victoria University of Wellington, New Zealand
Email: [email protected]

 
 
 
 
 
 