Which Papers Won At 35th AAAI Conference On Artificial Intelligence?

The 35th AAAI Conference on Artificial Intelligence (AAAI-21), held virtually this year, saw more than 9,000 paper submissions, of which only 1,692 research papers made the cut.

The Association for the Advancement of Artificial Intelligence (AAAI) committee has announced the Best Paper and Best Paper Runner-Up awards. Let's take a look at the papers that won.

Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting (Best Paper)

About: Informer is an efficient Transformer-based model for Long Sequence Time-series Forecasting (LSTF). A team of researchers from Beihang University, UC Berkeley and Rutgers University introduced the model to predict long sequences. Informer has three distinctive characteristics:

- a ProbSparse self-attention mechanism that achieves O(L log L) time and memory complexity;
- self-attention distilling, which halves each cascading layer's input to privilege dominating attention and handle extremely long inputs efficiently;
- a generative-style decoder that predicts a long sequence in a single forward pass rather than step by step, greatly speeding up inference.
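To make the first of these concrete, here is a minimal PyTorch sketch of the ProbSparse idea: the query-sparsity score and top-u attention are implemented directly, while the random key sampling Informer uses to keep the scoring step itself cheap is omitted. Function names are illustrative, not taken from the authors' code.

```python
import torch

def probsparse_query_scores(Q, K):
    # Simplified sparsity measurement from the Informer paper:
    # M(q_i, K) = max_j(q_i . k_j / sqrt(d)) - mean_j(q_i . k_j / sqrt(d)).
    # Queries with large M dominate the attention distribution.
    d = Q.shape[-1]
    scores = Q @ K.transpose(-2, -1) / d ** 0.5       # (..., L_Q, L_K)
    return scores.max(dim=-1).values - scores.mean(dim=-1)

def probsparse_attention(Q, K, V, u):
    # Only the top-u "active" queries attend; the remaining queries fall
    # back to the mean of V, mirroring ProbSparse self-attention.
    m = probsparse_query_scores(Q, K)                 # (..., L_Q)
    top = m.topk(u, dim=-1).indices                   # (..., u)
    d_v = V.shape[-1]
    out = V.mean(dim=-2, keepdim=True).expand(*Q.shape[:-1], d_v).clone()
    idx = top.unsqueeze(-1)
    Q_top = Q.gather(-2, idx.expand(*top.shape, Q.shape[-1]))
    attn = torch.softmax(Q_top @ K.transpose(-2, -1) / Q.shape[-1] ** 0.5, dim=-1)
    out.scatter_(-2, idx.expand(*top.shape, d_v), attn @ V)
    return out

# Toy usage: batch of 2, sequence length 96, width 32, keep 8 active queries.
Q, K, V = (torch.randn(2, 96, 32) for _ in range(3))
print(probsparse_attention(Q, K, V, u=8).shape)       # torch.Size([2, 96, 32])
```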

Read the paper here.

Exploration-Exploitation in Multi-Agent Learning: Catastrophe Theory Meets Game Theory (Best Paper)

About: Exploration-exploitation is a powerful tool in multi-agent learning (MAL). A team of researchers from the Singapore University of Technology and Design studied a variant of stateless Q-learning with softmax or Boltzmann exploration, also known as Boltzmann Q-learning or smooth Q-learning (SQL). Boltzmann Q-learning is one of the most fundamental models of exploration-exploitation in MAL.
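For intuition, below is a minimal NumPy sketch of the model under study, assuming a single agent facing a fixed noisy payoff; in the actual multi-agent setting each agent's payoff depends on the other agents' actions, which is what the paper analyzes.

```python
import numpy as np

def boltzmann(q, temperature):
    # Softmax (Boltzmann) policy over Q-values; higher temperature = more exploration.
    z = q / temperature
    z -= z.max()                       # numerical stability
    p = np.exp(z)
    return p / p.sum()

def smooth_q_learning(payoff, n_actions, temperature=0.5, lr=0.1, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros(n_actions)
    for _ in range(steps):
        a = rng.choice(n_actions, p=boltzmann(q, temperature))
        # Stateless update: no next-state bootstrap term, just a running
        # estimate of each action's payoff under softmax exploration.
        q[a] += lr * (payoff(a, rng) - q[a])
    return boltzmann(q, temperature)

# Toy usage: a noisy two-action problem standing in for one agent's view of a game.
policy = smooth_q_learning(lambda a, rng: rng.normal((0.2, 0.8)[a], 0.1), n_actions=2)
print(policy)                          # mass concentrates on the better action
```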

Read the paper here.

Mitigating Political Bias in Language Models Through Reinforced Calibration (Best Paper)

About: Researchers from Dartmouth College, the University of Texas and ProtagoLabs described metrics for measuring political bias in GPT-2 generation and proposed a reinforcement learning (RL) framework to reduce political bias in the generated text. Using rewards from word embeddings or a classifier, the RL framework guides debiased generation without access to the training data and without retraining the model. The researchers also proposed two bias metrics, indirect bias and direct bias, to quantify the political bias in language model generation.
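A hedged sketch of the reward-guided piece: given the log-probabilities of a sampled continuation and a scalar neutrality reward from a bias classifier (both stand-ins here, not the authors' code), a REINFORCE-style surrogate loss nudges generation toward neutral text without touching the training data.

```python
import torch

def reinforce_debias_loss(token_logprobs, reward, baseline=0.0):
    # REINFORCE-style surrogate: scale the sequence log-likelihood by the
    # debiasing reward (e.g. from a political-bias classifier or word
    # embeddings; higher = more neutral). Gradient descent on this loss
    # raises the probability of generations the reward model prefers.
    return -(reward - baseline) * token_logprobs.sum()

# Toy usage with made-up numbers: 12 sampled tokens, reward 0.7 vs baseline 0.5.
logprobs = torch.randn(12, requires_grad=True)
loss = reinforce_debias_loss(logprobs, reward=0.7, baseline=0.5)
loss.backward()   # in practice the gradients flow into the language model
```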

Read the paper here.

Learning From Extreme Bandit Feedback (Best Paper Runner-Up)

About: Researchers from Amazon and UC Berkeley studied the problem of batch learning from bandit feedback in extremely large action spaces. They introduced a selective importance sampling estimator (sIS) that operates in a significantly more favorable bias-variance regime. The sIS estimator performs importance sampling on the conditional expectation of the reward with respect to a small subset of actions for each instance.
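A rough sketch of the contrast, not the authors' implementation: vanilla inverse-propensity scoring reweights every logged action, so its variance blows up with the action space, while a selective estimator reweights only within a small per-instance subset of actions.

```python
import numpy as np

def ips(rewards, target_p, logging_p):
    # Vanilla inverse-propensity estimator: unbiased, but the weights
    # target_p / logging_p explode in extremely large action spaces.
    return np.mean(rewards * target_p / logging_p)

def selective_is(rewards, actions, target_p, logging_p, subsets):
    # Sketch of selective importance sampling: reweight only when the
    # logged action falls inside a small per-instance action subset
    # (e.g. the target policy's top-k), trading a little bias for a
    # large variance reduction. How subsets are chosen and how bias is
    # controlled follow the paper; this only shows the reweighting pattern.
    in_subset = np.array([a in s for a, s in zip(actions, subsets)])
    weights = np.where(in_subset, target_p / logging_p, 0.0)
    return np.mean(weights * rewards)
```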

Read the paper here.

Self-Attention Attribution: Interpreting Information Interactions Inside Transformer (Best Paper Runner-Up)

About: Researchers from Microsoft and Beihang University proposed a self-attention attribution algorithm to interpret the information interactions inside the Transformer. The researchers first extracted the most salient dependencies in each layer to construct an attribution graph, which reveals the hierarchical interactions inside the Transformer. Next, they applied self-attention attribution to identify the important attention heads. Finally, they showed that the attribution results can be used as adversarial patterns to implement non-targeted attacks against BERT.
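The attribution scores follow an integrated-gradients recipe applied to attention, roughly as in the sketch below; `model_fn` is a hypothetical hook that reruns the Transformer with the supplied attention matrix substituted in and returns a scalar such as the gold-label logit, and wiring it up is model-specific.

```python
import torch

def attention_attribution(model_fn, attn, steps=20):
    # Integrated-gradients-style attribution for one head's attention
    # matrix: accumulate gradients along the straight line from an
    # all-zero attention baseline to the observed attention scores,
    # then take the element-wise product with the scores themselves.
    total_grad = torch.zeros_like(attn)
    for k in range(1, steps + 1):
        scaled = ((k / steps) * attn).detach().requires_grad_(True)
        grad, = torch.autograd.grad(model_fn(scaled), scaled)
        total_grad += grad
    return attn * total_grad / steps   # large entries = salient interactions
```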

Read the paper here.

Dual-Mandate Patrols: Multi-Armed Bandits for Green Security (Best Paper Runner-Up)

About: Researchers from Harvard University and Carnegie Mellon University introduced LIZARD, an algorithm that accounts for the decomposability of the reward function, the smoothness of the decomposed reward function across features, the monotonicity of rewards as patrollers exert more effort, and the availability of historical data. According to the researchers, LIZARD leverages both decomposability and Lipschitz continuity simultaneously, bridging the gap between combinatorial and Lipschitz bandits.
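One ingredient, Lipschitz continuity, fits in a few lines: an arm's upper confidence bound is clipped by each neighbor's bound plus L times their distance in feature space. The sketch below is a generic Lipschitz-bandit step, not LIZARD itself, which additionally folds in decomposability, monotonicity and historical data.

```python
import numpy as np

def lipschitz_ucb(means, counts, t, L, dist):
    # Standard UCB indices, tightened via Lipschitz continuity:
    # arm i's reward can exceed arm j's bound by at most L * dist[i, j],
    # so take the minimum over all such neighbor-implied bounds.
    raw = means + np.sqrt(2.0 * np.log(t) / np.maximum(counts, 1))
    return np.min(raw[None, :] + L * dist, axis=1)

# Toy usage: 3 arms on a line with unit spacing between features.
dist = np.abs(np.subtract.outer(np.arange(3.0), np.arange(3.0)))
print(lipschitz_ucb(np.array([0.5, 0.6, 0.1]),
                    np.array([10, 2, 10]), t=22, L=0.2, dist=dist))
```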

Read the paper here.

