<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Trinilearn]]></title><description><![CDATA[AI, Computer Science, and Applications]]></description><link>https://trinilearn.com/</link><image><url>https://trinilearn.com/favicon.png</url><title>Trinilearn</title><link>https://trinilearn.com/</link></image><generator>Ghost 1.22</generator><lastBuildDate>Fri, 12 Dec 2025 09:54:09 GMT</lastBuildDate><atom:link href="https://trinilearn.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Neural Style Transfer: 50 Shades of Miaw]]></title><description><![CDATA[A step-by-step tutorial on how to make horrible cat art with Neural Style Transfer and Tensorflow 2.1.]]></description><link>https://trinilearn.com/neural-style-transfer-50-shades-of-miaw/</link><guid isPermaLink="false">5e29d59548ba8e04f1cfae53</guid><category><![CDATA[AI]]></category><category><![CDATA[Tutorial]]></category><dc:creator><![CDATA[Amélie Rolland]]></dc:creator><pubDate>Thu, 23 Jan 2020 17:41:38 GMT</pubDate><media:content url="https://trinilearn.com/content/images/2020/01/banner.jpg" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://trinilearn.com/content/images/2020/01/banner.jpg" alt="Neural Style Transfer: 50 Shades of Miaw"><p><em>A step-by-step tutorial on how to make horrible cat art with Neural Style Transfer and Tensorflow 2.1.</em></p>
<p>With only a few hours left on December 23rd, I decided to create a last-minute Christmas gift. I selected some pictures of my cat, used a pre-trained Neural Style Transfer model to create an artistic version of them, and merged the transformed pictures into a single image.</p>
<figure>
  <img src="https://trinilearn.com/content/images/2020/01/fina-atrocity.jpg" alt="Neural Style Transfer: 50 Shades of Miaw">
  <figcaption>This is art.</figcaption>
</figure>
<p>The resulting piece of art is so vivid and colorful that it would make anyone throw up if they dared look at it for too long. And I came to this realization only after I spent $40 on printing that thing on a canvas. Now this atrocity stares at me every time I sit in the kitchen, an endless reminder of my failed attempt at becoming an artist.</p>
<p>If you want to fail at cat art too, here is a step-by-step guide on how to do it with Tensorflow 2.1. The code is also available directly on <a href="https://github.com/a-ro/fifty-shades-of-miaw">GitHub</a>.</p>
<h2 id="neuralstyletransfer">Neural Style Transfer</h2>
<p>A Neural Style Transfer model takes two images as input and generates a new image. The generated image combines the content of the first image with the style of the second one.</p>
<p>The <a href="https://arxiv.org/abs/1508.06576">original approach</a> was proposed in 2015 and was based on the idea that the representations of content and style in a Convolutional Neural Network (CNN) are separable. The model can minimize a content loss that depends on the first image and a style loss that depends on the second one. By optimizing both objectives simultaneously, a random image can iteratively be updated to combine the content and style of two distinct images.</p>
<p><img src="https://trinilearn.com/content/images/2020/01/small-example.jpg" alt="Neural Style Transfer: 50 Shades of Miaw"></p>
<p>If we have a CNN that was trained for a computer vision task, deeper layers of this model will detect more complex information (e.g. faces rather than lines). When two images with similar content are processed in a deep-enough layer of that CNN, we can expect their activations to be similar as well. Following this idea, the content loss can be defined as an L2 norm between the activations of two images in a single deep layer of the CNN.</p>
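<p>Concretely, this content loss is just the mean squared difference between the two activation maps. Here is a minimal NumPy sketch of the computation (the function and array names are illustrative, not part of the tutorial code):</p>

```python
import numpy as np


def content_loss(content_activations: np.ndarray, generated_activations: np.ndarray) -> float:
    # L2 (mean squared) distance between the deep-layer activations of the
    # content image and those of the generated image.
    return float(np.mean((content_activations - generated_activations) ** 2))
```

<p>Two images with identical activations in that layer would give a loss of zero.</p>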
<p>For the style loss, the aim is to detect which texture components tend to occur together in an image. By computing a similarity matrix (a Gram matrix) over the channels of a specific CNN layer, we obtain &quot;correlation&quot; scores that represent how strongly textures co-occur. A style loss for a single layer can then be computed as the distance between the similarity matrix of the generated image and that of the style image. This comparison is repeated over several layers of the CNN, and the sum gives the complete style loss.</p>
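<p>A minimal NumPy sketch of the channel similarity (Gram) matrix and of a single-layer style loss might look like this (illustrative names, not the tutorial code):</p>

```python
import numpy as np


def gram_matrix(activations: np.ndarray) -> np.ndarray:
    # activations: feature maps of one CNN layer, shape (height, width, channels).
    height, width, channels = activations.shape
    features = activations.reshape(height * width, channels)
    # (channels x channels) matrix of channel co-occurrences,
    # averaged over all spatial positions.
    return features.T @ features / (height * width)


def single_layer_style_loss(style_activations: np.ndarray, generated_activations: np.ndarray) -> float:
    # Distance between the Gram matrices of the style and generated images.
    difference = gram_matrix(style_activations) - gram_matrix(generated_activations)
    return float(np.mean(difference ** 2))
```

<p>The complete style loss sums this quantity over the selected CNN layers.</p>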
<p>Neural Style Transfer has evolved over the years and other <a href="https://arxiv.org/abs/1705.06830">variants</a> have been proposed. These newer methods learn to generate the stylized image directly, resulting in improved computational performance. To learn more about Neural Style Transfer, you can check out this <a href="https://www.tensorflow.org/tutorials/generative/style_transfer">Tensorflow tutorial</a>.</p>
<h2 id="step1selectyourdatasets">Step 1: Select Your Datasets</h2>
<p>We need two types of datasets to use the Neural Style Transfer model.</p>
<h3 id="content">Content</h3>
<p>The first one is the content dataset that will contain our cat pictures. For this one I went through the 50,000 pictures I have of my cat and selected the best ones.</p>
<figure>
  <img src="https://trinilearn.com/content/images/2020/01/content.jpg" alt="Neural Style Transfer: 50 Shades of Miaw">
  <figcaption>My cat, Gandalf, after I told him he's adopted.</figcaption>
</figure>
<h3 id="style">Style</h3>
<p>The second dataset type is the style dataset. In this case, we're looking for pictures with interesting shapes and colors that will be used to transform the style of our original cat pictures. I ended up browsing Kaggle datasets until I found these two datasets:</p>
<figure>
  <img src="https://trinilearn.com/content/images/2020/01/style.jpg" alt="Neural Style Transfer: 50 Shades of Miaw">
  <figcaption>Art Images vs Overwatch Heroes.</figcaption>
</figure>
<ul>
<li><a href="https://www.kaggle.com/thedownhill/art-images-drawings-painting-sculpture-engraving">Art Images</a>: 9,000 images of 5 types of art including drawings and paintings.</li>
<li><a href="https://www.kaggle.com/renanmav/overwatch-heroes-recognition">Overwatch Heroes</a>: 2,291 images of Overwatch heroes.</li>
</ul>
<h2 id="step2installdependencies">Step 2: Install Dependencies</h2>
<p>The project has 3 main dependencies.</p>
<p><em>Pipfile</em></p>
<pre><code>[packages]
pillow = &quot;*&quot;
tensorflow-gpu = &quot;~=2.1&quot;
tensorflow-hub = &quot;*&quot;
</code></pre>
<p>You can install either <code>tensorflow-gpu</code> or <code>tensorflow</code>, depending on whether you have a GPU available. We will use <code>tensorflow-hub</code> to load the pre-trained model and <code>pillow</code> to process the images.</p>
<p>You can install these dependencies from the <code>Pipfile.lock</code> <a href="https://github.com/a-ro/fifty-shades-of-miaw">here</a>.</p>
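<p>Assuming you use Pipenv, the pinned versions can be installed straight from the lock file; otherwise plain pip works too (the package set below is the one from the Pipfile above):</p>

```shell
# Install the exact versions pinned in Pipfile.lock (requires Pipenv).
pipenv sync

# Or, without Pipenv, install the packages directly with pip
# (use tensorflow instead of tensorflow-gpu if you have no GPU).
pip install pillow "tensorflow-gpu~=2.1" tensorflow-hub
```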
<h2 id="step3loadandresizeimages">Step 3: Load and Resize Images</h2>
<p>We'll use the file <code>image_processing.py</code> to pre-process the images. The function below first loads an image from a file and transforms it into a float tensor with 3 color channels. Then, the tensor is resized so that its larger dimension (height or width) equals 512 while preserving the aspect ratio. Finally, we add an additional axis to represent the batch size. In our case, we'll process one image at a time, so the first axis will always be of size one.</p>
<p><em>image_processing.py</em></p>
<pre><code class="language-python">import tensorflow as tf


def load_image_tensor(image_path: str, max_dim: int = 512) -&gt; tf.Tensor:
    image_tensor = tf.io.read_file(image_path)
    image_tensor = tf.image.decode_image(
        image_tensor, channels=3, dtype=tf.float32
    )
    image_tensor = tf.image.resize(
        image_tensor, (max_dim, max_dim), preserve_aspect_ratio=True
    )
    image_tensor = image_tensor[tf.newaxis, :]
    return image_tensor
</code></pre>
<p>For example, if we load an image of size 1393 x 1943, then <code>image_tensor</code> will have the following shapes:</p>
<ul>
<li>(1393 x 1943 x 3) after decoding,</li>
<li>(367 x 512 x 3) after resizing,</li>
<li>(1 x 367 x 512 x 3) after adding a new axis.</li>
</ul>
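<p>The resize arithmetic can be checked with a short helper that mirrors the scaling logic (an illustration of the math, not part of the project code):</p>

```python
def resized_shape(height: int, width: int, max_dim: int = 512) -> tuple:
    # Both dimensions are scaled by the same factor, chosen so that the
    # larger dimension becomes max_dim and the aspect ratio is preserved.
    scale = max_dim / max(height, width)
    return (round(height * scale), round(width * scale))


print(resized_shape(1393, 1943))  # (367, 512)
```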
<h2 id="step4createanimagetensorgenerator">Step 4: Create an Image Tensor Generator</h2>
<p>In <code>loader.py</code>, we'll create a generator that calls the <code>load_image_tensor</code> function from the previous step. The generator loops through the files of a specified image directory; each image is converted to a tensor and yielded to the caller.</p>
<p><em>loader.py</em></p>
<pre><code class="language-python">from glob import glob
from os.path import join
from typing import Iterator

import tensorflow as tf

from fifty_shades.image_processing import load_image_tensor


def generate_image_tensors(directory_path: str) -&gt; Iterator[tf.Tensor]:
    file_path_regex = join(directory_path, &quot;*&quot;)
    for file_path in sorted(glob(file_path_regex)):
        image_tensor = load_image_tensor(file_path)
        yield image_tensor
</code></pre>
<h2 id="step5transformandsaveimages">Step 5: Transform and Save Images</h2>
<p>In <code>model.py</code>, we'll create an adapter class for our pre-trained model to transform and save our images. The constructor of that class downloads a pre-trained model from Tensorflow Hub; this Magenta model implements the fast arbitrary stylization variant mentioned earlier rather than the original optimization-based approach. You can modify the version number in the URL depending on which release of the model you want to use. In my case, I used version 1.</p>
<p><em>model.py</em></p>
<pre><code class="language-python">from os.path import join
from typing import Union, Iterator

import tensorflow as tf
import tensorflow_hub as hub
from PIL import Image
from keras_preprocessing.image import array_to_img

from fifty_shades.image_processing import save_image_from_tensor


class NeuralStyleTransfer:
    def __init__(self):
        self.model = hub.load(
            &quot;https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/1&quot;
        )

    def predict(
        self, content_tensor: tf.Tensor, style_tensor: tf.Tensor
    ) -&gt; Image:
        predicted_tensor = self.model(content_tensor, style_tensor)
        predicted_image = array_to_img(predicted_tensor[0][0])
        return predicted_image

    def predict_and_save(
        self,
        image_id: Union[str, int],
        content_tensor: tf.Tensor,
        style_tensor: tf.Tensor,
        save_directory_path: str,
    ) -&gt; Image:
        predicted_image = self.predict(content_tensor, style_tensor)
        predicted_image.save(join(save_directory_path, f&quot;{image_id}.png&quot;))
        save_image_from_tensor(
            save_directory_path, f&quot;{image_id}-content.png&quot;, content_tensor
        )
        save_image_from_tensor(
            save_directory_path, f&quot;{image_id}-style.png&quot;, style_tensor
        )
        return predicted_image

    def predict_and_save_all(
        self,
        content_tensors: Iterator[tf.Tensor],
        style_tensors: Iterator[tf.Tensor],
        save_directory_path: str,
    ) -&gt; None:
        for i, (content_tensor, style_tensor) in enumerate(
            zip(content_tensors, style_tensors)
        ):
            self.predict_and_save(
                i, content_tensor, style_tensor, save_directory_path,
            )
</code></pre>
<p>The <code>predict</code> method calls the model with the content and style images. The resulting tensor is transformed into an image with the <code>array_to_img</code> function of keras. The <code>predict_and_save</code> method saves the content, style, and transformed images after the prediction. In <code>predict_and_save_all</code>, we repeat this process for multiple content and style images.</p>
<p>I was only testing on a small image sample, so I processed the images one at a time. But if you are testing on a larger set, it would be faster to make predictions for multiple images at once instead. To save our content and style images more easily, I also added the following helper function.</p>
<p><em>image_processing.py</em></p>
<pre><code class="language-python">from os.path import join

import tensorflow as tf
from keras_preprocessing.image import array_to_img


def save_image_from_tensor(
    save_directory: str, file_name: str, tensor_image: tf.Tensor
) -&gt; None:
    save_path = join(save_directory, file_name)
    image = array_to_img(tensor_image[0])
    image.save(save_path)
</code></pre>
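<p>As mentioned above, batching predictions would speed up larger runs. Stacking only works if every image shares the same shape, so a fixed-size resize (rather than the aspect-preserving one used earlier) would be needed first. Here is a minimal NumPy sketch with a hypothetical helper:</p>

```python
import numpy as np


def make_batch(image_tensors) -> np.ndarray:
    # image_tensors: iterable of arrays shaped (1, height, width, channels),
    # as produced by load_image_tensor, all resized to the same fixed size.
    arrays = [np.asarray(tensor)[0] for tensor in image_tensors]
    # Result shape: (n_images, height, width, channels), ready to be
    # passed to the model in a single call.
    return np.stack(arrays)
```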
<h2 id="step6runthemodel">Step 6: Run the Model</h2>
<p>We now have all the pieces in place to run our model. In the file <code>example.py</code>, we first create a cat generator and a style generator that will be used to iterate over our image tensors. Then, the model is called to transform the images and save the results.</p>
<p><em>example.py</em></p>
<pre><code class="language-python">from fifty_shades.loader import generate_image_tensors
from fifty_shades.model import NeuralStyleTransfer


save_directory = &quot;my/save/directory&quot;
cat_generator = generate_image_tensors(&quot;cat/images/directory&quot;)
style_generator = generate_image_tensors(&quot;style/images/directory&quot;)
model = NeuralStyleTransfer()
model.predict_and_save_all(cat_generator, style_generator, save_directory)
</code></pre>
<p>Here is the result on a couple of sample images.</p>
<figure>
  <img src="https://trinilearn.com/content/images/2020/01/weird-small.jpg" alt="Neural Style Transfer: 50 Shades of Miaw">
  <figcaption>How to transform your weird cat selfies into something weirder.</figcaption>
</figure>
<h2 id="step7makeacanvas">Step 7: Make a Canvas</h2>
<p>If you want to print these images on a canvas like mine (hopefully yours will look better), find a photo store near you. They usually have an online tool where you can manually select the images you want to use and combine them.</p>
<figure>
  <img src="https://trinilearn.com/content/images/2020/01/atrocity-small.jpg" alt="Neural Style Transfer: 50 Shades of Miaw">
  <figcaption>From cat pictures to art failure.</figcaption>
</figure>
<h2 id="yourturn">Your Turn</h2>
<p>Now try it for yourself on your own data. Go outside, steal a cat, take pictures, transform them, and build a canvas. It's your turn to ruin someone's kitchen with your art.</p>
<figure>
  <img src="https://trinilearn.com/content/images/2020/01/bromance.jpg" alt="Neural Style Transfer: 50 Shades of Miaw">
  <figcaption>Picture taken moments before Nelson (right cat) was kidnapped by an old woman and sequestered in her house for 4 days.</figcaption>
</figure>
<p>If you want to learn more on Neural Style Transfer, here is a list of helpful resources:</p>
<ul>
<li>Deeplearning.ai <a href="https://www.youtube.com/watch?v=R39tWYYKNcI">videos</a> that explain the original Neural Style Transfer approach.</li>
<li>A <a href="https://www.tensorflow.org/tutorials/generative/style_transfer">Tensorflow tutorial</a> on Neural Style Transfer.</li>
<li>The original paper: <a href="https://arxiv.org/abs/1508.06576">A Neural Algorithm of Artistic Style</a>.</li>
</ul>
</div>]]></content:encoded></item><item><title><![CDATA[Top 5 Talks from the Quebec AI Meetup of 2019]]></title><description><![CDATA[The Artificial Intelligence Meetup in Quebec City had over 600 participants this year and highlighted research from Google Brain, Siemens, and other industries and universities. Here is a summary of my 5 favorite talks of the meetup that focused on reinforcement learning, creativity, and healthcare.]]></description><link>https://trinilearn.com/top-5-talks-from-the-quebec-ai-meetup-of-2019/</link><guid isPermaLink="false">5cbfccebdf5ad105138070a8</guid><category><![CDATA[AI]]></category><category><![CDATA[Conference]]></category><dc:creator><![CDATA[Amélie Rolland]]></dc:creator><pubDate>Wed, 24 Apr 2019 02:45:53 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1544531586-fde5298cdd40?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://images.unsplash.com/photo-1544531586-fde5298cdd40?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Top 5 Talks from the Quebec AI Meetup of 2019"><p>The <a href="https://www.itis.ulaval.ca/cms/site/itis/itis/Semaine_num_19/RDV_IA">Artificial Intelligence Meetup in Quebec City</a> had over 600 participants this year and highlighted research from Google Brain, Siemens, and many other industries and universities. This second edition took place at the Port of Quebec on April 8 and included two conference tracks with 20 presentations.</p>
<p>This year's focus was mostly on reinforcement learning, creativity, healthcare, smart cities, and IoT. Here is a summary of my 5 favorite talks of the meetup and a concluding remark on ethics.</p>
<h1 id="reinforcementlearninganddeepneuralnetworks">Reinforcement Learning and Deep Neural Networks</h1>
<p><em><a href="http://www.marcgbellemare.info/"><strong>Marc G. Bellemare</strong></a>, Senior Researcher at Google Brain and Adjunct Professor at McGill University</em></p>
<p>Marc introduced the concepts of reinforcement learning by showing how to make a &quot;Pâté Chinois&quot; recipe. The reinforcement learning agent starts at the first <em>state</em>: an empty plate. Then, the agent must choose an <em>action</em> to transition to a new state. In this case, an action consists of choosing an ingredient (e.g. chopped steak, corn, or potato), and the new state could be a plate with chopped steak. The goal of the agent is to choose a sequence of actions that will maximize a long-term <em>reward</em>, like successfully completing the recipe.</p>
<p>Marc has contributed to the creation of the <a href="https://github.com/mgbellemare/Arcade-Learning-Environment">Arcade Learning Environment</a><sup><a href="#ref-bellemare2013arcade">1</a></sup>, which serves as an interface to hundreds of Atari 2600 games. The goal of the agent in this environment is to choose the correct sequence of actions to win the game and obtain the highest score. The agent receives the current image of the game as input (state), and chooses an action that simulates pressing a button on a joystick controller. To assess the performance of these agents in new situations, a subset of games is used to train the agents and a distinct set is used for evaluation. Since these Atari games are highly diverse and were created by independent groups, they provide an interesting way to benchmark reinforcement learning agents and assess their general competency.<br>
<img src="https://trinilearn.com/content/images/2019/04/atari.jpg" alt="Top 5 Talks from the Quebec AI Meetup of 2019"><br>
In 2015, Marc and his team developed the Deep Q-Network (DQN)<sup><a href="#ref-mnih2015human">2</a></sup>, an agent that combines reinforcement learning with Convolutional Neural Networks (CNN). Previous reinforcement learning approaches were mostly based on linear representations and required manually defining how to transform an image into a relevant set of features (i.e. feature engineering). In contrast, CNNs are known to perform especially well on image classification tasks and can derive their own feature representation of the image. The combination of a reinforcement learning agent with a CNN produced an end-to-end agent that became more robust to changes across different games.</p>
<h1 id="duckietown">Duckietown</h1>
<p><em><a href="http://liampaull.ca/"><strong>Liam Paull</strong></a>, Assistant Professor, Computer Science and Operational Research Department, Montreal University</em></p>
<p>As a researcher working on autonomous cars, Liam has spent a lot of time on the engineering problem rather than the algorithmic one. He set out to find a way to test his research in a simpler environment. Thus was born <a href="https://www.duckietown.org/">Duckietown</a>, a miniature model of a town whose citizens are ducks. The autonomous car, consisting of a Raspberry Pi with a camera, must navigate the roads of Duckietown while respecting the traffic lights and avoiding pedestrians. The environment is <a href="https://github.com/duckietown/Software">open-source</a> and reproducible at home, and competition benchmarks are available. Live Duckiebot competitions will be held at <a href="https://www.icra2019.org/">ICRA</a> and <a href="https://nips.cc/">NeurIPS</a> this year.<br>
<img src="https://trinilearn.com/content/images/2019/04/ducks.jpg" alt="Top 5 Talks from the Quebec AI Meetup of 2019"></p>
<h1 id="theconstructionofgenerativemusicalmodelsandhowtousethemcreatively">The Construction of Generative Musical Models and How to Use them Creatively</h1>
<p><em><a href="https://ca.linkedin.com/in/pablo-samuel-castro-2113641b"><strong>Pablo Samuel Castro</strong></a>, Researcher, Google Brain</em></p>
<p>Pablo is a researcher and musician who uses machine learning models to generate music. He created the <a href="https://www.reimagine.ai/lyricai">LyricAI</a> system that helped the musician David Usher write new lyrics.</p>
<p>While the previous version of LyricAI was based on a Recurrent Neural Network (RNN), the new version<sup><a href="#ref-castro2018combining">3</a></sup> relies on Transformer models<sup><a href="#ref-vaswani2017attention">4</a></sup>. Transformer models replace the recurrent layers used by RNNs with a self-attention mechanism. This mechanism is especially important to help the model attend to similar words far back in the sequence and keep the next generated words coherent (i.e. capture long-term dependencies).</p>
<p>LyricAI uses two Transformer models. The first one is used to generate the grammatical structure of the lyrics. This model is trained on a lyric dataset where a lyric line is fed as input, and the model must predict the Part-of-Speech (PoS) tags of the next line. The second model is used to generate the words in the lyrics. In that case, the model is trained on a book dataset where sentences are divided into two parts. Given the words in the first part of the sentence and the PoS tags of the second part, the model must predict the sequence of words appearing in the second part of the sentence.<br>
<img src="https://trinilearn.com/content/images/2019/04/piano.jpg" alt="Top 5 Talks from the Quebec AI Meetup of 2019"><br>
Pablo ended his talk by doing a live demo of creative machine learning models. He used an RNN model to generate drum beats, started a piano improvisation, and let another model generate the remaining piano melody.</p>
<h1 id="aimlandhealthcarenowandthefuture">AI (ML) and Healthcare Now and the Future</h1>
<p><em><a href="http://goldenberglab.ca/"><strong>Anna Goldenberg</strong></a>, Senior Scientist at the Hospital for Sick Children and Assistant Professor at the University of Toronto</em></p>
<p>Machine learning has many applications in healthcare. Anna and her team have used machine learning approaches to tailor treatments to similar groups of individuals<sup><a href="#ref-saria2015subtyping">5</a></sup>, predict the age of cancer onset for people at high risks<sup><a href="#ref-erdman2017age">6</a></sup>, and predict malignancy of thyroid cancer to reduce unnecessary surgeries by 20%.</p>
<p>Despite these advances, several problems still prevent machine learning models from being used in healthcare. One reason is that existing policies can interfere with the data. For example, patients with asthma receive more aggressive treatments for pneumonia because of their higher risk. A model trained on patient outcomes could then wrongly infer that asthma leads to a higher chance of survival. Studies are also required to compare the predictions of a model with those of experts to prove that the model causes no harm before being deployed. Once the model is deployed, updates can be harder to implement because of FDA regulations, leading to lower performance over time.</p>
<h1 id="banditalgorithmsandtheirapplicationtoadaptiveclinicaltrials">Bandit Algorithms and their Application to Adaptive Clinical Trials</h1>
<p><em><a href="https://audurand.wordpress.com/a-propos/"><strong>Audrey Durand</strong></a>, Postdoctoral Student, McGill University</em></p>
<p>Audrey used an interactive reinforcement learning agent to select efficient treatments for mice with skin cancer<sup><a href="#ref-durand2018contextual">7</a></sup>. The treatments in this clinical trial were assigned sequentially rather than all at once at the beginning of the study. Hence, the effects of a treatment on the first patient were used to help the agent choose a treatment for the following patients.</p>
<p>The agent assigned treatments by estimating the probability of success of different options and balancing exploration and exploitation phases. The agent <em>exploits</em> when it assigns the optimal treatment according to current information. In contrast, the agent <em>explores</em> when it assigns a treatment to acquire additional information.<br>
<img src="https://trinilearn.com/content/images/2019/04/pills.jpg" alt="Top 5 Talks from the Quebec AI Meetup of 2019"><br>
During the trial, the life expectancy of the mice increased over time and the variance diminished. This behavior is expected because the time spent in the exploration phase decreases as the study progresses. The agent also learned to alternate between two treatments over time, which helped the mice recover from high doses of chemotherapy.</p>
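<p>The exploration/exploitation trade-off can be illustrated with a minimal epsilon-greedy sketch (a deliberate simplification, not the contextual-bandit algorithm used in the study; all names are illustrative):</p>

```python
import random


def choose_treatment(estimated_success_rates, epsilon, rng=random):
    # Explore with probability epsilon: try a random treatment to gather
    # more information about its effect.
    if epsilon > rng.random():
        return rng.randrange(len(estimated_success_rates))
    # Exploit otherwise: assign the treatment that currently looks best.
    best = max(range(len(estimated_success_rates)),
               key=lambda i: estimated_success_rates[i])
    return best
```

<p>Decaying epsilon over time mirrors the behavior Audrey described: less exploration, and therefore less variance, as the study progresses.</p>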
<h1 id="aclosingnoteonethics">A Closing Note on Ethics</h1>
<p>The meetup ended on a subject that is becoming increasingly popular among the AI community: ethical impacts of AI systems. François Laviolette, Director of the Big Data Research Center at Laval University, presented different ethical concerns surrounding AI systems. One example is the <a href="https://en.wikipedia.org/wiki/Cambridge_Analytica">Cambridge Analytica</a> scandal where personal data of millions of Facebook users was used without their consent to influence U.S. votes. Other concerns include fairness, privacy, and accountability of AI systems, as well as their impacts on job security.</p>
<p>To address these concerns, François and Lyse Langlois announced the creation of the <a href="https://observatoire-ia.ulaval.ca/">Observatory on the Societal Impacts of AI</a>. The observatory regroups 18 educational institutions and 160 researchers who will work to minimize the negative impacts of AI systems. If you or your company are interested in developing responsible AI systems, you are invited to sign the <a href="https://www.montrealdeclaration-responsibleai.com/context">Montreal Declaration for a Responsible Development of Artificial Intelligence</a>.</p>
<h1 id="references">References</h1>
<div id="refs" class="references">
<div id="ref-bellemare2013arcade">
<p>1. Bellemare, Marc G, Yavar Naddaf, Joel Veness, and Michael Bowling. 2013. “The Arcade Learning Environment: An Evaluation Platform for General Agents.” <em>Journal of Artificial Intelligence Research</em> 47. <a href="https://www.jair.org/index.php/jair/article/view/10819" class="uri">https://www.jair.org/index.php/jair/article/view/10819</a>: 253–79.</p>
</div>
<div id="ref-mnih2015human">
<p>2. Mnih, Volodymyr, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, et al. 2015. “Human-Level Control Through Deep Reinforcement Learning.” <em>Nature</em> 518 (7540). <a href="https://www.nature.com/articles/nature14236" class="uri">https://www.nature.com/articles/nature14236</a>; Nature Publishing Group: 529.</p>
</div>
<div id="ref-castro2018combining">
<p>3. Castro, Pablo Samuel, and Maria Attarian. 2018. “Combining Learned Lyrical Structures and Vocabulary for Improved Lyric Generation.” <em>ArXiv Preprint ArXiv:1811.04651</em>. <a href="https://arxiv.org/abs/1811.04651" class="uri">https://arxiv.org/abs/1811.04651</a>.</p>
</div>
<div id="ref-vaswani2017attention">
<p>4. Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. “Attention Is All You Need.” In <em>Advances in Neural Information Processing Systems</em>, 5998–6008. <a href="http://papers.nips.cc/paper/7181-attention-is-all-you-need" class="uri">http://papers.nips.cc/paper/7181-attention-is-all-you-need</a>.</p>
</div>
<div id="ref-saria2015subtyping">
<p>5. Saria, Suchi, and Anna Goldenberg. 2015. “Subtyping: What It Is and Its Role in Precision Medicine.” <em>IEEE Intelligent Systems</em> 30 (4). <a href="https://ieeexplore.ieee.org/abstract/document/7156005" class="uri">https://ieeexplore.ieee.org/abstract/document/7156005</a>; IEEE: 70–75.</p>
</div>
<div id="ref-erdman2017age">
<p>6. Erdman, Lauren, Ben Brew, Jason Berman, Adam Shlien, Andrea Doria, David Malkin, and Anna Goldenberg. 2017. “Age of Cancer Onset Differentiated by Sex and TP53 Codon Change in Li-Fraumeni Syndrome Patient Population.” <a href="http://cancerres.aacrjournals.org/content/77/13_Supplement/3409.short" class="uri">http://cancerres.aacrjournals.org/content/77/13_Supplement/3409.short</a>; AACR.</p>
</div>
<div id="ref-durand2018contextual">
<p>7. Durand, Audrey, Charis Achilleos, Demetris Iacovides, Katerina Strati, Georgios D Mitsis, and Joelle Pineau. 2018. “Contextual Bandits for Adapting Treatment in a Mouse Model of de Novo Carcinogenesis.” In <em>Machine Learning for Healthcare Conference</em>, 67–82. <a href="http://proceedings.mlr.press/v85/durand18a.html" class="uri">http://proceedings.mlr.press/v85/durand18a.html</a>.</p>
</div>
</div>
</div>]]></content:encoded></item><item><title><![CDATA[Cabane.io, a New Developer Conference in Quebec City]]></title><description><![CDATA[A desire to increase opportunities for software developers to learn and share their knowledge in Quebec City ignited the creation of Cabane.io, a new developer conference rich in live demos and code examples.
]]></description><link>https://trinilearn.com/cabane-io-first-edition/</link><guid isPermaLink="false">5c4e093edf5ad10513807091</guid><category><![CDATA[Computer Science]]></category><category><![CDATA[Conference]]></category><dc:creator><![CDATA[Amélie Rolland]]></dc:creator><pubDate>Sun, 27 Jan 2019 19:45:01 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1533092703529-ebd8da81d519?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://images.unsplash.com/photo-1533092703529-ebd8da81d519?ixlib=rb-1.2.1&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=1080&fit=max&ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Cabane.io, a New Developer Conference in Quebec City"><p>A desire to increase opportunities for software developers to learn and share their knowledge in Quebec City ignited the creation of <a href="https://cabane.io/">Cabane.io</a>, a new conference for developers. The idea that originated from a conversation over a beer was concretized on January 19 when 120 participants entered the <em>Musée national des beaux-arts du Québec</em> for the first edition of Cabane.io.</p>
<p>The conference created by <a href="https://www.linkedin.com/in/vincent-seguin-b31a7020">Vincent Seguin</a> and <a href="https://www.linkedin.com/in/guisim">Guillaume Simard</a>, two software developers from the region, instantly captured the interest of the community and was sold out in less than 7 hours. The presentations were rich in live demos and code examples with subjects covering programming languages, front-end development, machine learning, DevOps, and more.</p>
<p>The conference comprised 6 talks of 45 minutes and 5 lightning talks of 10 minutes. Here is a quick overview of all the presentations with links to the videos, slides, and code when available.</p>
<p>Some slides are only available on the <a href="https://cabane-io.slack.com">Slack channel</a> of Cabane.io; you can create a free account <a href="https://cabane-io.slack.com/join/shared_invite/enQtNTE0MDA1MTgzODI4LTM0MDQwODYzOWRjNDg3YjJjMmFlM2Q1Njk5MzgyMDJkOGM1NmUyYjU3MGZkNDI2YWViOGZjN2ZlNDQzNTVmYjU">here</a> to access them.</p>
<h1 id="mainconferencetalks">Main Conference Talks</h1>
<h2 id="reactatscale">React at Scale</h2>
<p><em><a href="https://www.linkedin.com/in/williamfortin"><strong>William Fortin</strong></a>, Engineering Lead, <a href="https://www.pleo.io">Pleo.io</a></em></p>
<p>William (a.k.a. Martha Stewart) showed a live recipe to convert JavaScript code into TypeScript, and took pre-baked code from his oven to present <a href="https://reactjs.org/docs/higher-order-components.html">Higher-Order Components</a>, <a href="https://reactjs.org/docs/render-props.html">Render Props</a>, and <a href="https://reactjs.org/docs/hooks-intro.html">Hooks</a> with <a href="https://reactjs.org/">React</a>. With his live demo, William showed that enterprises can gradually adopt React by iteratively focusing on small rewrites.</p>
<p><span><i class="ic ic-link"></i><a href="https://youtu.be/5JiXkiDLrdM">video</a></span> &nbsp<span><i class="ic ic-link"></i><a href="https://github.com/wfortin/cabane-twitter-client/blob/master/react-talk.pdf">slides</a></span> &nbsp <span><i class="ic ic-link"></i><a href="https://github.com/wfortin/cabane-twitter-client">code</a></span></p>
<h2 id="elixirafunctionalremedytowebdevelopment">Elixir, a Functional Remedy to Web Development</h2>
<p><em><a href="https://www.linkedin.com/in/gcauchon"><strong>Guillaume Cauchon</strong></a>, Software Engineer, <a href="https://www.mirego.com/">Mirego</a></em></p>
<p><a href="https://elixir-lang.org/">Elixir</a> is a functional language designed for scalability and maintainability with a syntax inspired by <a href="https://www.ruby-lang.org">Ruby</a>. By diving into the language, configurations, tests, and deployment, Guillaume showed that Elixir might just be the solution to propel your Web application to the next level.</p>
<p><span><i class="ic ic-link"></i><a href="https://youtu.be/pfnL6rD0WM8">video</a></span> &nbsp<span><i class="ic ic-link"></i><a href="https://speakerdeck.com/gcauchon/elixir-remede-fonctionnel-pour-le-developpement-web">slides</a></span> &nbsp <span><i class="ic ic-link"></i><a href="https://github.com/gcauchon/cabane.io__web_remedy">code</a></span></p>
<h2 id="streamprocessingonawsusingthekappaarchitecture">Stream Processing on AWS Using the Kappa Architecture</h2>
<p><em><a href="https://www.linkedin.com/in/joeybg"><strong>Joey Bolduc-Gilbert</strong></a>, Software Developer &amp; Team Lead, <a href="https://www.xpertsea.com/">XpertSea</a></em></p>
<p>XpertSea analyzes information about aquatic organisms by using IoT devices to collect aquaculture data. Joey presented the core system of their data platform based on the <a href="http://www.kappa-architecture.com/">Kappa Architecture</a> for real-time stream processing, a simplified approach that avoids relying on a batch processing layer and where the code base depends instead on a single processing framework.</p>
<p><span><i class="ic ic-link"></i><a href="https://youtu.be/y8I1WY-IIv0">video</a></span> &nbsp<span><i class="ic ic-link"></i><a href="https://www.slideshare.net/JoeyBolducGilbert/case-study-stream-processing-on-aws-using-kappa-architecture">slides</a></span></p>
<h2 id="neuralnetworkinpython">Neural Network in Python</h2>
<p><em><a href="https://www.linkedin.com/in/carl-chouinard-5066823"><strong>Carl Chouinard</strong></a>, Chief AI Officer, <a href="https://vooban.com/">Vooban</a></em></p>
<p>Carl presented a Python implementation of a Neural Network created from scratch to classify images of handwritten digits using the well-known <a href="http://yann.lecun.com/exdb/mnist/">MNIST dataset</a>. From layers and activation functions to back-propagation and evaluation, Carl gave an overview of the building blocks behind Neural Networks.</p>
<p><span><i class="ic ic-link"></i><a href="https://youtu.be/mOmWp-rE_m0">video</a></span></p>
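<p>The building blocks mentioned above (layers, an activation function, back-propagation) can be sketched in a few dozen lines of plain Python. The sketch below is mine, not Carl's code, and it trains on the tiny XOR problem rather than MNIST:</p>

```python
import math
import random

random.seed(42)

# A one-hidden-layer network in plain Python, trained with
# back-propagation on the XOR problem.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4  # number of hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    # Layer 1 (hidden) then layer 2 (output), both with sigmoid activations.
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b) for ws, b in zip(w1, b1)]
    out = sigmoid(sum(w * hi for w, hi in zip(w2, h)) + b2)
    return h, out

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

err_before = mse()
lr = 1.0
for _ in range(5000):
    for x, y in data:
        h, out = forward(x)
        # Back-propagation: the squared-error gradient flows output -> hidden.
        d_out = (out - y) * out * (1 - out)
        for j in range(H):
            d_h = d_out * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * d_out * h[j]
            b1[j] -= lr * d_h
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
        b2 -= lr * d_out

err_after = mse()
print(err_before, err_after)  # training reduces the error
```

<p>A real MNIST classifier follows the same pattern, just with larger layers, a softmax output, and mini-batches of images instead of four hand-written examples.</p>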
<h2 id="creatingaterraformprovider">Creating a Terraform Provider</h2>
<p><em><a href="https://www.linkedin.com/in/atheriault2"><strong>André Thériault</strong></a>, Software Developer, <a href="https://www.coveo.com">Coveo</a></em><br>
<em><a href="https://www.linkedin.com/in/maxime-coulombe-13305b81"><strong>Maxime Coulombe</strong></a>, Software Developer, <a href="https://www.coveo.com">Coveo</a></em></p>
<p><a href="https://www.terraform.io/">Terraform</a> is an open-source tool that uses declarative configuration files to create, manage, and update infrastructure resources. André and Maxime showed the power of Terraform by creating a custom Coveo provider and making a surprising revelation about a dreadful situation where the simplicity of a Terraform rollback saved the day.</p>
<p><span><i class="ic ic-link"></i><a href="https://youtu.be/XR_bPZWFmak">video</a></span> &nbsp<span><i class="ic ic-link"></i><a href="https://docs.google.com/presentation/d/14WQbARhgh-MFC4e9vznSnyYR6tKBlaU3XDlr8JWfC2M/edit#slide=id.g437c571095_3_991">slides</a></span></p>
<h2 id="codereviewlikeaboss">Code Review Like a Boss</h2>
<p><em><a href="https://www.linkedin.com/in/marc-antoine-aub%C3%A9-4a988021/"><strong>Marc-Antoine Aubé</strong></a>, Programmer II, <a href="https://www.poka.io/">Poka</a></em></p>
<p>Code reviews are a way to collaborate, share knowledge, and improve the code base, but they don't always result in a positive experience. From the standpoints of a reviewer, an author, and a team, Marc-Antoine shared best practices to transform code reviews into a learning and mentoring experience that everyone can enjoy.</p>
<p><span><i class="ic ic-link"></i><a href="https://youtu.be/vaf4yHM3ZHc">video</a></span> &nbsp<span><i class="ic ic-link"></i><a href="https://speakerdeck.com/marcaube/faire-de-la-revue-de-code-comme-un-pro-cabane-quebec-2019">slides</a></span></p>
<h1 id="lightningtalks">Lightning Talks</h1>
<h2 id="customizationwithvuejs">Customization with Vue.js</h2>
<p><em><a href="https://www.linkedin.com/in/jeansebtr"><strong>Jean-Sébastien Tremblay</strong></a>, Developer, <a href="https://snipcart.com/">Snipcart</a></em></p>
<p>Snipcart is an HTML and JavaScript-based shopping cart that can be easily integrated into a Web application to add e-commerce functionalities. Jean-Sébastien presented how Snipcart made the switch to <a href="https://vuejs.org/">Vue.js</a> and was able to make its shopping cart overridable for customization purposes.</p>
<p><span><i class="ic ic-link"></i><a href="https://youtu.be/L-j_vvhPaus">video</a></span></p>
<h2 id="deploymentwithkubeflow">Deployment with Kubeflow</h2>
<p><em><a href="https://www.linkedin.com/in/matehat"><strong>Mathieu D'Amours</strong></a>, Chief Technology Officer, <a href="http://www.braver.net">Braver</a></em></p>
<p><a href="https://www.kubeflow.org/">Kubeflow</a> combines the strengths of <a href="https://kubernetes.io/">Kubernetes</a> and <a href="https://www.tensorflow.org/">Tensorflow</a> to simplify the deployment of machine learning workflows. Mathieu presented the different components of Kubeflow by diving into the core concepts behind Kubernetes, creating Tensorflow models with Jupyter Notebooks, and showing a quick example of Kubeflow at work.</p>
<p><span><i class="ic ic-link"></i><a href="https://youtu.be/sMhJqnZUSMw">video</a></span> &nbsp<span><i class="ic ic-link"></i><a href="https://cabane-io.slack.com/messages/CEDBV2WB1/files/FFHUX4CDS/">slides</a></span></p>
<h2 id="templatingwithlithtml">Templating with lit-html</h2>
<p><em><a href="https://www.linkedin.com/in/pdesjardins90"><strong>Philippe Desjardins</strong></a>, Software Developer, <a href="https://www.xpertsea.com/">XpertSea</a></em></p>
<p>Philippe wants to create an amazing application to like and dislike different types of <em>Cabane</em>, but he's not impressed by current JavaScript frameworks and libraries like React, Vue.js, and Angular. Armed with <a href="https://lit-html.polymer-project.org/">lit-html</a>, a new library to write HTML templates in JavaScript, Philippe created an efficient <em>Cabane</em> application right before our eyes.</p>
<p><span><i class="ic ic-link"></i><a href="https://youtu.be/y5_OGjbU4C8">video</a></span> &nbsp<span><i class="ic ic-link"></i><a href="https://github.com/pdesjardins90/cabane-2019">code</a></span></p>
<h2 id="iottoservertowebsockets">IoT to Server to WebSockets</h2>
<p><em><a href="https://www.linkedin.com/in/slegare"><strong>Sébastien Légaré</strong></a>, Team Lead, <a href="https://vooban.com/">Vooban</a></em></p>
<p><a href="http://keeogo.com">Keeogo</a> is a lower-body exoskeleton designed to support those with mobility difficulties during daily activities. Sébastien presented how his team used WebSockets to solve the challenging task of connecting the Keeogo to the cloud for device updates, component integrity verification, and user statistics visualization.</p>
<p><span><i class="ic ic-link"></i><a href="https://youtu.be/dZYUDjsN9VM">video</a></span></p>
<h2 id="dontgetoxidizedwithrust">Don't Get Oxidized with Rust</h2>
<p><em><a href="https://www.linkedin.com/in/pierre-alexandre-st-jean-80a0b9143"><strong>Pierre-Alexandre St-Jean</strong></a>, Senior Software Engineer, <a href="https://www.pleo.io">Pleo.io</a></em></p>
<p>Pierre-Alexandre is a hipster programmer who worships the programming language <a href="https://www.rust-lang.org">Rust</a>. By impersonating a functional developer, an object-oriented programmer, and a Lisp guru, Pierre-Alexandre showed that Rust is suited to different programming paradigms, but there are many ways you can shoot yourself in the foot.</p>
<p><span><i class="ic ic-link"></i><a href="https://youtu.be/lQmthRjmhzQ">video</a></span> &nbsp<span><i class="ic ic-link"></i><a href="https://cabane-io.slack.com/messages/CEDBV2WB1/files/FFJGYKKT7/">slides</a></span></p>
<br>
<br>
<p>Want to know more about Cabane.io? Consult their <a href="https://cabane.io/">website</a>, join the <a href="https://cabane-io.slack.com/join/shared_invite/enQtNTE0MDA1MTgzODI4LTM0MDQwODYzOWRjNDg3YjJjMmFlM2Q1Njk5MzgyMDJkOGM1NmUyYjU3MGZkNDI2YWViOGZjN2ZlNDQzNTVmYjU">slack channel</a> or follow them on <a href="https://twitter.com/cabaneio">Twitter</a>, <a href="https://www.facebook.com/cabaneio/">Facebook</a>, and <a href="https://www.youtube.com/channel/UCHh1T6-UMXfTbecQv_fLVSA">YouTube</a>.</p>
</div>]]></content:encoded></item><item><title><![CDATA[The Quebec AI Meetup of 2018]]></title><description><![CDATA[More than 500 attendees gathered last month for the Artificial Intelligence Meetup in Quebec City. Presentations from researchers and industry practitioners were focused on deep learning, natural language processing, computer vision, and  industrial applications of artificial intelligence.]]></description><link>https://trinilearn.com/the-quebec-ai-meetup-of-2018/</link><guid isPermaLink="false">5b04b446df5ad10513807077</guid><category><![CDATA[AI]]></category><category><![CDATA[Conference]]></category><dc:creator><![CDATA[Amélie Rolland]]></dc:creator><pubDate>Wed, 23 May 2018 01:20:06 GMT</pubDate><media:content url="https://trinilearn.com/content/images/2018/05/cover.jpg" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://trinilearn.com/content/images/2018/05/cover.jpg" alt="The Quebec AI Meetup of 2018"><p>More than 500 attendees gathered last month for the <a href="https://www.itis.ulaval.ca/cms/site/itis/rviaqc">Artificial Intelligence Meetup in Quebec City</a>. Presentations from researchers and industry practitioners were focused on deep learning, natural language processing, computer vision, and  industrial applications of artificial intelligence. This post presents a summary of each technical talk of the meetup.</p>
<h2 id="outline">Outline</h2>
<ol>
<li><a href="#bengio">Artificial Intelligence and Deep Learning - <em>Yoshua Bengio</em></a></li>
<li><a href="#laviolette">Artificial Intelligence: Between Potential and Possible - <em>François Laviolette</em></a></li>
<li><a href="#gagne">Overview of the AI Ecosystem in Quebec  - <em>Christian Gagné and Alexandra Masson</em></a></li>
<li><a href="#coveo">Coveo Machine Learning: Extensible Machine Learning Platform for Personalized Predictions - <em>Sébastien Paquet</em></a></li>
<li><a href="#gartner">How AI Helps Gartner Know (Almost) Everything on the Job Market Worldwide - <em>Andriy Burkov</em></a></li>
<li><a href="#khoury">Health and Toxicity in Online Conversations - <em>Richard Khoury</em></a></li>
<li><a href="#scaleai">SCALE.AI : AI-Powered Supply Chains - <em>Louis Roy</em></a></li>
<li><a href="#lalonde">Artificial Intelligence and Augmented Reality - <em>Jean-François Lalonde</em></a></li>
<li><a href="#cardinal">Towards Intelligent Nanoscopy: Artificial Intelligence Applications in the Study of Brain Molecular Mechanisms - <em>Flavie Lavoie-Cardinal</em></a></li>
<li><a href="#thales">Real Time Analysis of Decision Patterns and Bio-Behavioral Data for Augmented Human-Machine Systems - <em>Daniel Lafond</em></a></li>
<li><a href="#panel">Panel : Artificial Intelligence and Industry 4.0 in Quebec - <em>Alexandre Vallières, Jonathan Gaudreault, Yves Proteau, Sébastien Bujold, and Kevin Spahr</em></a></li>
</ol>
<h2 id="anamebengioa1artificialintelligenceanddeeplearning"><a name="bengio"></a> 1. Artificial Intelligence and Deep Learning</h2>
<p><em><strong>Yoshua Bengio</strong>,  Scientific Director, Montreal Institute for Learning Algorithms (MILA)</em></p>
<p>Artificial intelligence applications are everywhere. They can converse in natural language, recognize objects in images, play complex games like Go, and even detect cancer cells in images. These advances were made possible with the evolution of AI techniques over the years. While early stages of AI were mostly focused on the formalization of knowledge, we now use machine learning methods to let computers gather knowledge by themselves from observations. Now, the source of intelligence is data.</p>
<p>Deep learning is a set of machine learning methods based on neural networks. Brought to the fore by Yoshua Bengio, Geoffrey Hinton, and Yann LeCun in 2006, deep learning approaches are particularly good at perception tasks like interpreting sounds or images. Their ability to absorb a lot of information and adapt to different contexts has made them methods of choice for many machine learning tasks.</p>
<p>Deep learning methods create an implicit representation of the knowledge. As a result, they can transform data into a form that is more semantically interesting. This phenomenon was observed on a convolutional neural network that was trained to recognize scenes (e.g. office, restaurant) in images <a href="#ref-zhou2014object">(Zhou, 2014)</a>. While the network only had access to a training set of images with scene labels, some computation units of the network had specialized to detect people, animals, and lighting. Hence, the model created a new representation that helped it generalize and improve its performance.</p>
<p>Learning representations is also useful for natural language processing tasks. Around 2000, Bengio and his team trained a neural network to learn word representations <a href="#ref-bengio2003neural">(Bengio, 2003)</a>. This trained model could transform a word into a real-valued vector such that words with similar meanings were close to each other in that vector space. For instance, a 2-dimensional representation of that space could show that &quot;was&quot; and &quot;were&quot; or &quot;come&quot; and &quot;go&quot; were neighbors. This work was the foundation of word embeddings, which are a key component of most natural language processing tasks today.</p>
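<p>The "neighbors in vector space" idea can be sketched with made-up 2-D vectors (real embeddings have hundreds of dimensions learned from text; the values below are purely illustrative):</p>

```python
import math

# Toy 2-D word vectors (hypothetical values, for illustration only).
vectors = {
    "was":  (0.9, 0.1),
    "were": (0.85, 0.15),
    "come": (0.2, 0.9),
    "go":   (0.1, 0.95),
}

def cosine(a, b):
    # Cosine similarity: close to 1.0 when two vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Words with similar meanings are neighbors in the vector space.
print(cosine(vectors["was"], vectors["were"]))  # ~0.998 (neighbors)
print(cosine(vectors["was"], vectors["go"]))    # ~0.21  (far apart)
```

<p>The same similarity measure is what lets downstream models treat "was" and "were" as near-interchangeable features.</p>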
<p>Deep learning algorithms can learn to generate images, music, and text. Generative adversarial networks (GANs) are a type of deep neural networks that learn to approximate the distribution of the data <a href="#ref-goodfellow2014generative">(Goodfellow, 2014)</a>. GANs are composed of two networks, the generative and discriminative networks, which compete against each other as adversaries. The generative network generates data and tries to fool the discriminative network, while the discriminative network acts as a cop and tries to detect whether the data is real or was generated. Both networks optimize a different objective and are trained in parallel.</p>
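<p>The adversarial game between the two networks can be reduced to a toy sketch (an illustration of mine, not from the talk): the "real" data is a single constant, the generator is one parameter, and the discriminator is a one-unit logistic model. The alternating updates mirror the two competing objectives:</p>

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))

# Toy GAN: "real" data is the constant 3.0, the generator is a single
# parameter theta, and the discriminator is sigmoid(a*x + b).
theta, a, b = 0.0, 1.0, 0.0
lr_d, lr_g = 0.5, 0.05

for _ in range(2000):
    real, fake = 3.0, theta
    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    d_real, d_fake = sigmoid(a * real + b), sigmoid(a * fake + b)
    a += lr_d * ((1 - d_real) * real - d_fake * fake)
    b += lr_d * ((1 - d_real) - d_fake)
    # Generator step: move theta so the discriminator labels it "real".
    d_fake = sigmoid(a * theta + b)
    theta += lr_g * (1 - d_fake) * a

print(theta)  # theta should drift toward the real value 3.0
```

<p>In a real GAN both players are deep networks and the data is high-dimensional, but the training loop keeps this same alternating shape.</p>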
<p>The field of AI has made great progress over the years. However, we are still far from human intelligence. Most industrial applications of machine learning are currently based on supervised learning, where the machine learning model is trained on a dataset that contains the right answers for the task to learn. Conversely, humans tend to learn by interacting with their environment, in an unsupervised way.</p>
<h2 id="anamelaviolettea2artificialintelligencebetweenpotentialandpossible"><a name="laviolette"></a> 2. Artificial Intelligence: Between Potential and Possible</h2>
<p><em><strong>François Laviolette</strong>, Director, Big Data Research Center (CRDM), Laval University</em></p>
<p>Artificial intelligence is no longer restricted to large corporations. Companies of various sizes and sectors have benefited from adding AI into their core operations. But despite the potential of AI, there are some limitations.</p>
<p>AI systems are no better than the data they are trained on. <a href="https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/#sm.00000gjdpwwcfcus11t6oo6dw79gw">Microsoft's AI Chatbot Tay</a>, which learned to converse by interacting with internet users, started to send Nazi messages and was stopped 16 hours after its launch. While Tay is an extreme example, racism and sexism can be present in the data. Originally applied to domain adaptation problems, domain adversarial training of neural networks (DANN) can help to deal with fairness issues <a href="#ref-ajakan2014domain">(Ajakan, 2014)</a>. DANN can learn a new representation of the data that performs well on the task to learn, but performs poorly on a discriminative task like discriminating between men and women.</p>
<p>Problems can also arise when the training data is not representative of the reality. Consider a machine learning model that learns to classify images of cats and dogs. Suppose that each cat image in the training data contains a bowl of food, while each dog image contains a ball. A machine learning model trained on this data could learn to use only bowl and ball features to classify cats and dogs perfectly. The system would perform well on this particular dataset, but perform poorly in a real-life scenario.</p>
<p>Robustness is another issue with AI systems. Small perturbations to input examples can make machine learning models output incorrect answers with high confidence. This phenomenon was observed on a neural network trained to classify images <a href="#ref-goodfellow2014explaining">(Goodfellow, 2014)</a>. The model correctly labeled a panda image with 57% confidence. However, the same model labeled the panda image as a gibbon with 99% confidence after the input pixels were slightly altered.</p>
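<p>The effect can be sketched on a toy linear classifier. The weights and input below are made up, and the perturbation is exaggerated so it is visible on a three-feature model (the pixel changes in the panda example are imperceptible):</p>

```python
import math

# A tiny linear "classifier": p(class 1) = sigmoid(w.x + b).
# Weights and input are hypothetical, chosen for illustration.
w = [2.0, -3.0, 1.5]
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

x = [0.5, -0.2, 0.3]
print(predict(x))  # confidently class 1 (~0.90)

# FGSM-style perturbation: nudge each feature by epsilon in the
# direction that lowers the score. For a linear model, the gradient
# of the score with respect to x is simply w.
eps = 0.4
x_adv = [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]
print(predict(x_adv))  # confidence collapses after the nudge (~0.39)
```

<p>A deep image classifier behaves the same way, except the gradient is computed through the whole network rather than read off the weights.</p>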
<p>AI systems have the potential to impact companies in many ways. But we need to understand their limitations. Above all, we need to be careful about how we choose to use the data we collect.</p>
<h2 id="anamegagnea3overviewoftheaiecosysteminquebec"><a name="gagne"></a> 3. Overview of the AI Ecosystem in Quebec</h2>
<p><em><strong>Christian Gagné</strong>, Professor, Department of Electrical and Computer Engineering, Laval University</em><br>
<em><strong>Alexandra Masson</strong>, Director - Innovation, Québec International</em></p>
<p>Quebec City is becoming a key player in the field of artificial intelligence.</p>
<p>Laval University has developed AI expertise in both fundamental research and applications. Research groups such as the <a href="http://www.damas.ift.ulaval.ca/">DAMAS</a>, <a href="https://graal.ift.ulaval.ca/">GRAAL</a>, and <a href="http://reparti.gel.ulaval.ca/">REPARTI</a> are advancing AI research in fields like robotics, computer vision, natural language processing, and bioinformatics. Other groups like the <a href="http://crdm.ulaval.ca/">CRDM</a> and <a href="https://www.forac.ulaval.ca/en_bref/">FORAC</a> are working closely with the industry. The CRDM helps businesses with big data challenges and brings together researchers from five faculties.</p>
<p>Laval University also has international visibility in AI. <a href="https://scholar.google.ca/citations?user=uwwWC3cAAAAJ">François Laviolette</a>, <a href="https://scholar.google.ca/citations?user=egixsbEAAAAJ">Christian Gagné</a>, <a href="https://scholar.google.com/citations?user=hW9fwNYAAAAJ&amp;hl">Jean-François Lalonde</a>, <a href="https://scholar.google.ca/citations?user=M792u2sAAAAJ">Mario Marchand</a>, <a href="https://scholar.google.com/citations?user=tgZPkzkAAAAJ&amp;hl">Philippe Giguère</a>, and <a href="http://www2.ift.ulaval.ca/~chaib/publications.html">Brahim Chaib-Draa</a> are six professors at Laval University who have contributed significantly to top AI conferences.</p>
<p>More than 75 companies in Quebec City use AI in their operations. These companies are operating in diverse sectors including health, security, insurance, management, marketing, and entertainment. The AI expertise in industries is also diversified and includes machine learning, sensors, and automation.</p>
<p>The current number of AI experts in Quebec City is not enough to fulfill the industry's needs. For that reason, a new Professional Master’s Program in Artificial Intelligence will start this fall at Laval University. This program will combine fundamental AI courses with internships to help train 35 to 50 students in AI each year.</p>
<h2 id="anamecoveoa4coveomachinelearningextensiblemachinelearningplatformforpersonalizedpredictions"><a name="coveo"></a> 4. Coveo Machine Learning: Extensible Machine Learning Platform for Personalized Predictions</h2>
<p><em><strong>Sébastien Paquet</strong>, Team Lead - Data Analysis, Coveo</em></p>
<p><a href="https://www.coveo.com/">Coveo</a> is a provider of search engines for businesses. Founded in 2005, the company currently has more than 300 employees. Their products are used by thousands of companies in 18 different languages.</p>
<p>Machine learning is central to Coveo's products. Their machine learning models can suggest search queries, optimize search results, make recommendations based on navigation history, and detect important terms in queries. Their infrastructure is fully automated to train machine learning models and select hyper-parameters.</p>
<p>Coveo uses machine learning to improve search results. Consider a user who searches for a computer mouse with a serial number that does not exist in the system. If the search engine is based solely on keyword matching, no results would be returned for that search query. With machine learning, the system can learn from the queries of that user and the results they clicked. As more users search for the same serial number, the machine learning model can learn to match it with the corresponding mouse product, and thus improve the search over time.</p>
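<p>A minimal sketch of this feedback loop is a per-query click counter that reranks candidate results. The product names and serial number below are invented, and a production system would weight many more signals:</p>

```python
from collections import defaultdict

# Clicks observed so far, keyed by (query, document).
clicks = defaultdict(int)

def record_click(query, doc):
    clicks[(query, doc)] += 1

def rank(query, candidates):
    # Put the documents users clicked most for this query first.
    return sorted(candidates, key=lambda doc: -clicks[(query, doc)])

# Two users find the right product for an unknown serial number...
record_click("SN-12345", "wireless-mouse-x200")
record_click("SN-12345", "wireless-mouse-x200")
record_click("SN-12345", "keyboard-k700")

# ...so later searches for that serial number rank the mouse first.
print(rank("SN-12345", ["keyboard-k700", "wireless-mouse-x200"]))
```
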
<p>Coveo's machine learning models can make personalized predictions. When a user is typing a query, their machine learning models can suggest a set of candidate queries to complete the prefix entered by the user. Often, the machine learning model only has a few input characters to make a prediction. To improve the suggestions, they use a clustering approach to group users into different clusters prior to the query suggestion. Hence, a Salesforce user and an anonymous user could get different query suggestions for the same prefix.</p>
<p>Coveo has different machine learning projects in progress, including chatbots and e-commerce recommendations.</p>
<h2 id="anamegartnera5howaihelpsgartnerknowalmosteverythingonthejobmarketworldwide"><a name="gartner"></a> 5. How AI Helps Gartner Know (Almost) Everything on the Job Market Worldwide</h2>
<p><em><strong>Andriy Burkov</strong>, Global ML Team Leader, Gartner</em></p>
<p><a href="https://www.gartner.com/">Gartner</a> is a research and advisory firm that provides insights, advice, and tools to business leaders worldwide. Wanted Technologies, a provider of talent recruitment tools, was acquired by Corporate Executive Board (CEB) in 2015, which was later acquired by Gartner in 2017. Following the work of Wanted Technologies, Gartner uses artificial intelligence to help human resources with talent recruitment.</p>
<p>Artificial intelligence has many applications in talent recruitment. Gartner uses artificial intelligence to predict the ideal profile for a job, find candidates that match a specific profile, and estimate job salaries. Their AI models can predict candidate-specific information, such as whether a candidate is ready to leave their current employment and whether they will stay long in the company. Their predictions can also help human resources estimate the duration of the recruitment process.</p>
<p>Gartner has an automated AI pipeline to harvest, normalize, and index job data. Their crawlers download millions of job postings from different websites. They use machine learning to extract job posting attributes like company names, locations, salaries, occupations, and dates. They can also detect duplicate posts or posts that refer to multiple jobs.</p>
<p>Gartner uses machine learning and natural language processing for salary extraction. They use a binary classification model to predict if a number is a salary or not. When a number is predicted as a salary, they divide the phrase into tokens for further analysis. For instance, in <em>$2k monthly</em>, the <em>$</em> symbol is the currency, <em>2k</em> is the amount and means 2000, and <em>monthly</em> is the period. Hence, 2000 should be multiplied by 12 to become a yearly salary.</p>
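<p>The normalization step for a phrase like <em>$2k monthly</em> can be sketched as below. The regular expression and the period table are my assumptions for illustration, not Gartner's actual pipeline (the hourly multiplier assumes a 2,080-hour work year):</p>

```python
import re

# Multipliers to turn a per-period amount into a yearly salary.
PERIODS = {"hourly": 2080, "weekly": 52, "monthly": 12, "yearly": 1}

def annualize(phrase):
    # Tokenize "$<amount><k?> <period>" into currency, amount, and period.
    match = re.match(r"\$(\d+(?:\.\d+)?)(k?)\s+(\w+)", phrase)
    amount = float(match.group(1)) * (1000 if match.group(2) else 1)
    return amount * PERIODS[match.group(3)]

print(annualize("$2k monthly"))   # 2000 * 12 = 24000.0
print(annualize("$50000 yearly"))  # 50000.0
```
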
<p>Gartner's machine learning team is planning to work on different projects in the future, such as crawlers that can automatically understand website structures to find specific information, and a writing assistant for job postings.</p>
<h2 id="anamekhourya6healthandtoxicityinonlineconversations"><a name="khoury"></a> 6. Health and Toxicity in Online Conversations</h2>
<p><em><strong>Richard Khoury</strong>, Professor, Department of Computer Science and Software Engineering, Laval University</em></p>
<p>Toxicity is present in online conversations. Some individuals use online media to manipulate public opinion, propagate racist and sexist ideologies, and encourage suicide. Groups like ISIS use social media to expand their influence and recruit people. And with the massive volume of online messages, it becomes impossible for human moderators to monitor everything.</p>
<p>Artificial Intelligence can help with online toxicity. The goal is not to monitor everything, but rather to identify the individuals that should be monitored. One possible approach is to model the personality of online users, and then predict which types of personality are more likely to become toxic. Human moderators can then focus their efforts on users with a potentially toxic personality.</p>
<p>Traditional approaches to measuring personality are challenging to apply online. Personality models like the Big Five and the Dark Triad can be used to describe personalities. However, they are typically measured by answering long questionnaires. The user could simply refuse to answer the questionnaire or answer dishonestly.</p>
<p>Alternatively, artificial intelligence can be used to predict user personality from online messages. Richard Khoury and his team did an experiment with 899 randomly selected Twitter accounts to predict personality traits from Twitter messages. Their results suggested, for instance, that personalities with high psychopathy and Machiavellianism were associated with glorifying and aggressive messages.</p>
<p>For their next experiments, Richard Khoury and his team plan to use more users, diversify their sources, and work with psychologists to define toxicity rating metrics.</p>
<h2 id="anamescaleaia7scaleaiaipoweredsupplychains"><a name="scaleai"></a> 7. SCALE.AI : AI-Powered Supply Chains</h2>
<p><em><strong>Louis Roy</strong>, President and Founder, Optel Group</em></p>
<p>Supply chains are the sequences of processes involved in moving a product from a supplier to a consumer. They include the sourcing, manufacturing, distribution, and delivery of products and services. <a href="https://aisupplychain.ca">SCALE.AI</a> (Supply Chains and Logistics Excellence.AI) is an industry-led consortium that aims to use AI technologies to create an intelligent supply chain platform.</p>
<p>Supply chains are crucial to our society. They contribute to the transformation of natural resources into finished products and create a massive amount of jobs along the way. But supply chains are not perfect. We currently consume in six months what the earth can produce in a year. If we do not change our current supply chains and stop wasting natural resources, what are the impacts for our planet?</p>
<p>Technology can improve supply chain processes in many ways. Manufacturers can use technology to optimize their stocks and minimize their losses. Companies can improve their competitiveness by automating different processes. Blockchain can trace products back to their sources and improve transaction security. With the massive amount of data created by supply chains, artificial intelligence and other technologies can thus contribute to the improvement of supply chain activities.</p>
<h2 id="anamelalondea8artificialintelligenceandaugmentedreality"><a name="lalonde"></a>8. Artificial Intelligence and Augmented Reality</h2>
<p><em><strong>Jean-François Lalonde</strong>, Professor, Department of Computer Science and Software Engineering, Laval University</em></p>
<p>Augmented reality is a combination of a real world and a virtual environment. The perception of the real world environment is altered by the incorporation of illusions that are perceived as being part of the environment. For instance, augmented reality can simulate realistic situations to train surgeons, and help architects add virtual 3D models of objects in their designs. Currently, the two main challenges of augmented reality are to adapt illusions to the movement of real-world objects and to the lighting conditions of the real-world environment.</p>
<p>Mathieu Garon in collaboration with Creaform worked on the real-time tracking of objects for augmented reality. Consider the problem of creating an illusion to make a red toy dragon appear blue in a real-world environment. As the dragon moves, the blue illusion must move as well to avoid showing any red on the dragon. Assuming that the 3D model of the dragon is known, the goal is to train a neural network to predict the change in position and orientation of the dragon at each time step. During prediction time, the network receives as input the current and previous images, the previous  position, and the previous orientation. However, by using only these inputs, small prediction errors will accumulate at each time step and the predicted position of the object will eventually diverge from the real one. For that reason, the neural network must also use the predicted change in position and orientation as input. That way, the network can learn to correct its predictions and stop the error propagation.</p>
<p>Virtual objects that are illuminated with the real lighting conditions can better integrate with the real world environment. When the lighting conditions in an image are known, they can be used to estimate the illumination that must be applied on the virtual object. Marc-André Gardner worked in collaboration with Adobe to estimate lighting conditions in images <a href="#ref-gardner2017learning">(Gardner, 2017)</a>. His team used a large dataset of outdoor panoramas to train a convolutional neural network for this task. By using a panorama, they were able to obtain the ground truth of the lighting conditions. They then cut the panoramas into smaller images and trained the network with these images. At prediction time, they were able to predict lighting conditions for new images and insert virtual objects that appeared realistic.</p>
<h2 id="anamecardinala9towardsintelligentnanoscopyartificialintelligenceapplicationsinthestudyofbrainmolecularmechanisms"><a name="cardinal"></a> 9. Towards Intelligent Nanoscopy: Artificial Intelligence Applications in the Study of Brain Molecular Mechanisms</h2>
<p><em><strong>Flavie Lavoie-Cardinal</strong>, Researcher, CERVO, Laval University</em></p>
<p>Flavie Lavoie-Cardinal studies the molecular interactions in living cells. These interactions include the communication between neurons and the evolution of synapses. With a super-resolution optical microscope, living cells can be observed in real time. However, image quality can vary greatly depending on the microscope parameters and the structure of interest. Since non-experts have difficulty judging the quality of super-resolution images, selecting the best parameters for this microscope is highly challenging.</p>
<p>Artificial intelligence can help non-experts optimize a super-resolution optical microscope to obtain high-quality images. Flavie Lavoie-Cardinal and her team used a convolutional neural network (CNN) to learn to predict the quality of super-resolution images <a href="#ref-robitaille2018learning">(Robitaille, 2018)</a>. They modeled the problem as a regression problem, where the CNN is given an input image and outputs a real-valued score between zero and one that represents the quality of the image. The CNN was trained on a set of high-resolution images that were labeled by an expert, and now runs automatically during their experiments to assist non-experts in predicting the quality of images. Non-experts can thus use the feedback of the CNN to optimize the parameters of the microscope and obtain high-quality images.</p>
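<p>The regression setup can be illustrated with a minimal sketch. The convolutional backbone is omitted (its output is stood in by a feature vector); the point shown is the head of the model, where a sigmoid maps the prediction into [0, 1] so it is comparable to the expert's quality labels. All names here are hypothetical.</p>

```python
import numpy as np

# Minimal sketch of the quality-regression head: features from a
# (hypothetical) convolutional backbone are squashed through a sigmoid
# so the predicted score always lands in (0, 1).
def quality_score(features, weights, bias):
    logit = features @ weights + bias
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid keeps the score in (0, 1)

rng = np.random.default_rng(0)
feats = rng.normal(size=8)  # stand-in for CNN features of one image
score = quality_score(feats, rng.normal(size=8), 0.0)
```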
<h2 id="anamethalesa10realtimeanalysisofdecisionpatternsandbiobehavioraldataforaugmentedhumanmachinesystems"><a name="thales"></a> 10. Real Time Analysis of Decision Patterns and Bio-Behavioral Data for Augmented Human-Machine Systems</h2>
<p><em><strong>Daniel Lafond</strong>, Specialist in Cognitive Engineering and Human Factors, Thales</em></p>
<p><a href="https://www.thalesgroup.com">Thales</a> provides services for the avionics, defense, security, aerospace, and transportation markets. Founded in 2000, Thales currently has 61,000 employees and operates in 56 countries. Thales Quebec is their fifth technology research center and focuses on humans, data, and the Internet of Things (IoT).</p>
<p>Thales is working on the real-time detection of critical states in humans. Workers like pilots or truck drivers must be extremely vigilant during their work. Their performance can be affected by stress, mental load, and fatigue. With biosensors and artificial intelligence, workers can be monitored during their activities and critical states can be inferred from the data. However, critical state classifiers have some limitations. Thales observed a significant drop in classifier performance when the classifiers were tested on individuals other than those they were trained on. Hence, the classifiers must currently be calibrated for each individual to perform well.</p>
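<p>The cross-subject performance gap Thales observed is exactly what a leave-one-subject-out evaluation is designed to expose: hold out every recording of one individual, train on the rest, and compare with a within-subject split. A minimal sketch of such a splitter (hypothetical data layout):</p>

```python
# Hypothetical sketch of leave-one-subject-out splitting: the test set
# contains only individuals the classifier has never seen in training.
def leave_one_subject_out(samples):
    """samples: list of (subject_id, features, label) tuples."""
    subjects = sorted({s for s, _, _ in samples})
    for held_out in subjects:
        train = [x for x in samples if x[0] != held_out]
        test = [x for x in samples if x[0] == held_out]
        yield held_out, train, test

# Toy dataset with two subjects and two recordings each.
data = [("a", 0, 0), ("a", 1, 1), ("b", 2, 0), ("b", 3, 1)]
splits = list(leave_one_subject_out(data))
```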
<p>Thales also uses interpretable machine learning models to understand and support human decisions. Since human judgment can be affected by many factors, a statistical model of a doctor can be better than the doctor at making certain decisions. Expertise can also be hard to communicate, so learning a statistical model from data can be easier than developing an expert system. As an example, Joanny Grenier worked on understanding doctors' decisions for the detection of sepsis in patients <a href="#ref-grenier2016processus">(Grenier, 2016)</a>. Sepsis is a dangerous condition whose symptoms can vary widely among patients. She modeled the decision process of individual doctors by learning a decision tree from past decisions and constraining the model to use only the information the doctor actually consulted. The trained doctor model can then be compared to a better-performing collective model to understand in which cases the decision processes differ.</p>
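<p>The constraint on the per-doctor model can be implemented as a preprocessing step: before fitting the decision tree, drop every variable the doctor did not consult. A minimal sketch (the variable names are hypothetical, not from Grenier's study):</p>

```python
# Hypothetical sketch: restrict the per-doctor model's inputs to the
# variables that doctor actually consulted, before fitting a tree.
def restrict_to_consulted(records, consulted):
    """records: list of dicts of patient variables; keep only consulted keys."""
    return [{k: r[k] for k in consulted} for r in records]

records = [
    {"heart_rate": 130, "temperature": 39.5, "lab_lactate": 3.1},
    {"heart_rate": 95, "temperature": 37.0, "lab_lactate": 1.2},
]
# Suppose the chart logs show this doctor only looked at the vitals:
masked = restrict_to_consulted(records, consulted=["heart_rate", "temperature"])
```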
<h2 id="anamepanela11panelartificialintelligenceandindustry40inquebec"><a name="panel"></a> 11. Panel: Artificial Intelligence and Industry 4.0 in Quebec</h2>
<p><em>Moderator: <strong>Alexandre Vallières</strong>, Cofounder, AIworx</em></p>
<p><em><strong>Jonathan Gaudreault</strong>, Director, Research Consortium in Engineering and Industrial Systems 4.0, Laval University</em><br>
Following steam power, electricity, and computer automation, technologies like AI and IoT are giving rise to the fourth industrial revolution. AI can help the manufacturing sector by supporting human decisions, detecting causes of manufacturing problems, and making predictions for different scenarios. For example, a manufacturer of Nespresso machines used a neural network to predict the optimal configuration of the lacquering process given the meteorological conditions. These predictions decreased the percentage of parts that had to be reprocessed from 60% to 5%. Hence, manufacturers can greatly benefit from adding AI technologies into their operations.</p>
<p><em><strong>Yves Proteau</strong>, Codirector, APN</em><br>
<a href="http://www.apnca.com">APN</a> is a machine shop that transforms metal into high precision products. APN uses AI to predict product measurements, plan their production optimally, and predict when their machines should be maintained. They also have a robot that uses AI to avoid workers on the floor. AI has many applications in their industry, and collecting the right data is key to the success of their AI technologies.</p>
<p><em><strong>Sébastien Bujold</strong>, Analyst - Production Systems, Aluminerie Alouette</em><br>
<a href="http://www.alouette.com">Aluminerie Alouette</a> is an aluminum manufacturing company that was founded in 1989. The company is based in Sept-Îles and produces approximately 600,000 metric tonnes of aluminum per year. Aluminerie Alouette is using artificial intelligence to improve its electrolysis process. The company trained a convolutional neural network on time series data to detect anode effects and instabilities during the electrolysis process. With the trained network, it can ensure continuous monitoring and early detection of these problems to minimize production losses.</p>
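<p>The core idea of applying a convolutional network to a time series can be illustrated with a single 1-D convolution: a learned filter slides over the signal, and strong responses flag candidate events. The sketch below is purely illustrative (a toy voltage trace with a spike and a hand-picked edge filter, not Alouette's actual model):</p>

```python
import numpy as np

# Illustrative sketch: a 1-D convolution slides over a cell's time series;
# high filter responses flag candidate anode effects. Filter and threshold
# here are hand-picked; a real CNN learns them from labeled data.
def conv1d_scores(series, kernel):
    k = len(kernel)
    return np.array([series[i:i + k] @ kernel for i in range(len(series) - k + 1)])

voltage = np.array([4.1, 4.1, 4.2, 6.5, 6.4, 4.2, 4.1])  # toy voltage spike
edge = np.array([-1.0, 1.0])      # stand-in for a learned rising-edge filter
scores = conv1d_scores(voltage, edge)
alarm = bool(scores.max() > 1.0)  # threshold would be tuned on real data
```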
<p><em><strong>Kevin Spahr</strong>, Physicist and Data Analyst, LeddarTech</em><br>
<a href="https://leddartech.com/">LeddarTech</a> specializes in the development of Light Detection and Ranging (LIDAR) sensors. A LIDAR creates a point cloud (i.e., a set of points in 3D space) that captures the distances to a target area. The point cloud is built by sending light pulses at the target and capturing the reflected light. LIDARs have many applications in self-driving cars: they can, for instance, be used to detect pedestrians, obstacles, and other objects in a given area.</p>
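<p>Each range in the point cloud follows directly from the round-trip time of a light pulse: the pulse travels to the target and back, so distance = speed of light × time / 2. A minimal worked example:</p>

```python
# Range from time of flight: the pulse covers the distance twice
# (out and back), hence the division by two.
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_time_of_flight(t_seconds):
    return C * t_seconds / 2.0

# A return detected after ~66.7 ns corresponds to a target about 10 m away.
d = range_from_time_of_flight(66.7e-9)
```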
<h2 id="references">References</h2>
<div id="refs" class="references">
<div id="ref-ajakan2014domain">
<p>Ajakan, Hana, Pascal Germain, Hugo Larochelle, François Laviolette, and Mario Marchand. 2014. “Domain-Adversarial Neural Networks.” <em>ArXiv Preprint ArXiv:1412.4446</em>.</p>
</div>
<div id="ref-bengio2003neural">
<p>Bengio, Yoshua, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. “A Neural Probabilistic Language Model.” <em>Journal of Machine Learning Research</em> 3 (Feb): 1137–55.</p>
</div>
<div id="ref-gardner2017learning">
<p>Gardner, Marc-André, Kalyan Sunkavalli, Ersin Yumer, Xiaohui Shen, Emiliano Gambaretto, Christian Gagné, and Jean-François Lalonde. 2017. “Learning to Predict Indoor Illumination from a Single Image.” <em>ACM Transactions on Graphics (SIGGRAPH Asia)</em> 36 (6).</p>
</div>
<div id="ref-goodfellow2014explaining">
<p>Goodfellow, Ian J, Jonathon Shlens, and Christian Szegedy. 2014. “Explaining and Harnessing Adversarial Examples.” <em>ArXiv Preprint ArXiv:1412.6572</em>.</p>
</div>
<div id="ref-goodfellow2014generative">
<p>Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. “Generative Adversarial Nets.” In <em>Advances in Neural Information Processing Systems</em>, 2672–80.</p>
</div>
<div id="ref-grenier2016processus">
<p>Grenier, Joanny. 2016. “Processus décisionnel en contexte de détection du sepsis pédiatrique.” PhD thesis, Université Laval.</p>
</div>
<div id="ref-robitaille2018learning">
<p>Robitaille, Louis-Émile, Audrey Durand, Marc-André Gardner, Christian Gagné, Paul De Koninck, and Flavie Lavoie-Cardinal. 2018. “Learning to Become an Expert: Deep Networks Applied to Super-Resolution Microscopy.” <em>ArXiv Preprint ArXiv:1803.10806</em>.</p>
</div>
<div id="ref-zhou2014object">
<p>Zhou, Bolei, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. 2014. “Object Detectors Emerge in Deep Scene CNNs.” <em>ArXiv Preprint ArXiv:1412.6856</em>.</p>
</div>
</div></div>]]></content:encoded></item></channel></rss>