Compare revisions
Commits on Source (70)
Showing with 6654 additions and 0 deletions
@@ -5,3 +5,8 @@
*.nav
*.pdf
*.snm
*.blg
*.bbl
!report/articles/*.pdf
.DS_Store
*.synctex.gz
File added
\documentclass[]{beamer}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{siunitx}
% \usepackage[caption=false]{subfig}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{etoolbox}
\usepackage{marvosym} %scorpion, capricorn
\newcommand{\scorp}{\text{\Scorpio}}
\newcommand{\capri}{\text{\Capricorn}}
\usepackage{cancel} % strike through equations
\usepackage{tikz}
\usetikzlibrary{shapes.geometric,arrows}
\usetikzlibrary{mindmap,backgrounds,positioning}
\usetikzlibrary{automata,positioning,fit,backgrounds}
\usetikzlibrary{positioning,arrows}
\usetikzlibrary{matrix,chains,positioning,decorations.pathreplacing,arrows}
\usepackage{pgfplots}
% \usepackage{subcaption}
\usetheme{Warsaw}
% \usepackage{aas_macros}
\graphicspath{{pictures/}} % change the root directory for images
\usepackage{thmtools}
% \declaretheorem[name=Definition]{definition}
% \declaretheorem[name=Theorem]{theorem}
% \declaretheorem[name=Lemma,sibling=theorem]{lemma}
% \usetheme{Ilmenau}
% \usecolortheme{seahorse}
% \usepackage{apacite}
% General theme of the slideshow - almost mandatory
% \usetheme{Malmoe}
% \usetheme{Copenhagen}
% We will see later what this means
% \usecolortheme[named=blue]{structure}
% \usepackage[citestyle=verbose]{biblatex}
% \setbeamercolor{itemize item}{fg=white!80!black}
% \setbeamercolor{itemize subitem}{fg=white!80!black}
\AtBeginSection[]
{
\begin{frame}<beamer>
\frametitle{Outline}
\tableofcontents[currentsection]
\end{frame}
}
\defbeamertemplate*{footline}{shadow theme}{%
\leavevmode%
\hbox{\begin{beamercolorbox}[wd=.5\paperwidth,ht=2.5ex,dp=1.125ex,leftskip=.3cm plus1fil,rightskip=.3cm]{author in head/foot}%
\usebeamerfont{author in head/foot}\hfill\insertshortauthor
\end{beamercolorbox}%
\begin{beamercolorbox}[wd=.5\paperwidth,ht=2.5ex,dp=1.125ex,leftskip=.3cm,rightskip=.3cm plus1fil]{title in head/foot}%
\usebeamerfont{title in head/foot}\insertshorttitle\hfill%
\insertframenumber\,/\,\inserttotalframenumber
\end{beamercolorbox}}%
\vskip0pt%
}
% \addtobeamertemplate{navigation symbols}{}{%
% \usebeamerfont{footline}%
% \usebeamercolor[fg]{footline}%
% \hspace{1em}%
% \insertframenumber/\inserttotalframenumber
% }
\title[Projet Réseau]{Presentation of the Projet Réseau}
\author{Colisson Léo \and Jeanmaire Paul}
\institute{École Normale Supérieure Paris Saclay, Computer Science Department \\Introduction to Research}
\begin{document}
% ==================================================
\begin{frame}
\titlepage
\end{frame}
% ==================================================
\begin{frame}
\tableofcontents
\end{frame}
% ==================================================
\section{Neural Networks}
% TODO : Paul
\section{Gradient Backpropagation}
% TODO : Léo
\begin{frame}[fragile]
\frametitle{Gradient Backpropagation}
\begin{figure}[H]
\begin{tikzpicture}[
plain/.style={
draw=none,
fill=none,
},
net/.style={
matrix of nodes,
nodes={
draw,
circle,
inner sep=10pt
},
nodes in empty cells,
column sep=2cm,
row sep=-9pt
},
>=latex
]
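% The matrix below lays out a 5-3-1 feed-forward network: the |[plain]| cells
% are invisible padding that keeps the three columns (5 inputs, 3 hidden
% units, 1 output) vertically aligned.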
\matrix[net] (mat)
{
|[plain]| \parbox{1cm}{} & |[plain]| \parbox{1cm}{} & |[plain]| \parbox{1cm}{} \\
& |[plain]| \\
|[plain]| & \\
& |[plain]| \\
|[plain]| & |[plain]| \\
& & \\
|[plain]| & |[plain]| \\
& |[plain]| \\
|[plain]| & \\
& |[plain]| \\
};
\foreach \ai [count=\mi] in {2,4,...,10}
\draw[<-] (mat-\ai-1) -- node[above] {Input \mi} +(-2cm,0);
\foreach \ai in {2,4,...,10}
{\foreach \aii in {3,6,9}
\draw[->] (mat-\ai-1) -- (mat-\aii-2);
}
\foreach \ai in {3,6,9}
\draw[->] (mat-\ai-2) -- (mat-6-3);
\draw[->] (mat-6-3) -- node[above] {Output} +(2cm,0);
\end{tikzpicture}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{Gradient Backpropagation}
\begin{block}{General case: gradient descent}
Minimize a function $f(x)$ with the gradient descent algorithm:
\begin{enumerate}
\item Compute the gradient $\nabla f(x_k)$
\item Stop if $\|\nabla f(x_k)\| \leq \varepsilon$
\item Compute $x_{k+1} = x_k - \lambda \nabla f(x_k)$ for a well-chosen $\lambda$
\item Repeat
\end{enumerate}
A worked example is sketched on the next slide.
\end{block}
\end{frame}
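% A minimal worked example (not in the original slides; the step size and
% starting point are assumed): one run of gradient descent on f(x) = x^2.
\begin{frame}
\frametitle{Gradient Backpropagation}
\begin{exampleblock}{Worked example: gradient descent on $f(x) = x^2$}
Here $\nabla f(x) = 2x$. With $\lambda = 0.1$ and $x_0 = 1$:
$$x_1 = 1 - 0.1 \cdot 2 = 0.8, \qquad x_2 = 0.8 - 0.1 \cdot 1.6 = 0.64, \qquad x_k = 0.8^k \to 0$$
Each step multiplies the iterate by $(1 - 2\lambda)$, so the method converges to the minimizer $x^* = 0$ whenever $0 < \lambda < 1$.
\end{exampleblock}
\end{frame}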
\begin{frame}
\frametitle{Gradient Backpropagation}
\begin{exampleblock}{The neural network case}
\begin{itemize}
\item $\mathcal{N}_\omega(x) =$ function computed by the neural network ($\omega =$ weights, $x =$ inputs)
\item $\mathcal{E}_x =$ target value of $\mathcal{N}_\omega$ in the training set.
\end{itemize}
Goal: find ``the best weights'' $\Rightarrow$ minimize, for a fixed $x$, the function
$$f: \omega \mapsto \left\|\mathcal{N}_\omega(x) - \mathcal{E}_x\right\|^2$$
Problem: we do not know how to compute $\nabla f$ in one single block.\\
Solution: local backpropagation (sketched on the next slide).
\end{exampleblock}
\end{frame}
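% A minimal sketch (not in the original slides): why the gradient can be
% computed locally, illustrated on an assumed two-layer network
% N_omega(x) = sigma(w_2 sigma(w_1 x)).
\begin{frame}
\frametitle{Gradient Backpropagation}
\begin{exampleblock}{Local propagation via the chain rule (sketch)}
For $\mathcal{N}_\omega(x) = \sigma(w_2\,\sigma(w_1 x))$, write $a_1 = \sigma(w_1 x)$ and $a_2 = \sigma(w_2 a_1)$. The chain rule gives
$$\frac{\partial f}{\partial w_2} = \frac{\partial f}{\partial a_2}\,\sigma'(w_2 a_1)\,a_1, \qquad
\frac{\partial f}{\partial w_1} = \frac{\partial f}{\partial a_2}\,\sigma'(w_2 a_1)\,w_2\,\sigma'(w_1 x)\,x.$$
Every factor is local to one layer, so the gradient is accumulated backwards, layer by layer.
\end{exampleblock}
\end{frame}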
\begin{frame}[fragile]
\frametitle{Gradient Backpropagation}
\begin{figure}[H]
\begin{tikzpicture}[
plain/.style={
draw=none,
fill=none,
},
net/.style={
matrix of nodes,
nodes={
draw,
circle,
inner sep=10pt
},
nodes in empty cells,
column sep=2cm,
row sep=-9pt
},
>=latex
]
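% Same 5-3-1 layout as the previous figure, but with the inter-layer arrows
% reversed to depict the gradient flowing backwards through the network.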
\matrix[net] (mat)
{
|[plain]| \parbox{1cm}{} & |[plain]| \parbox{1cm}{} & |[plain]| \parbox{1cm}{} \\
& |[plain]| \\
|[plain]| & \\
& |[plain]| \\
|[plain]| & |[plain]| \\
& & \\
|[plain]| & |[plain]| \\
& |[plain]| \\
|[plain]| & \\
& |[plain]| \\
};
\foreach \ai [count=\mi] in {2,4,...,10}
\draw[<-] (mat-\ai-1) -- node[above] {Input \mi} +(-2cm,0);
\foreach \ai in {2,4,...,10}
{\foreach \aii in {3,6,9}
\draw[<-] (mat-\ai-1) -- (mat-\aii-2);
}
\foreach \ai in {3,6,9}
\draw[<-] (mat-\ai-2) -- (mat-6-3);
\draw[<-] (mat-6-3) -- node[above] {Output} +(2cm,0);
\end{tikzpicture}
\end{figure}
\end{frame}
\section{Graph Neural Network (GNN)}
\begin{frame}
\frametitle{Graph Neural Network (GNN)}
\begin{block}{GNN}
\begin{itemize}
\item Idea: being able to apply a neural network to a graph
\end{itemize}
\end{block}
\end{frame}
\tikzset{
  vertex/.style={circle,fill=black!25,minimum size=20pt,inner sep=0pt},
  selected vertex/.style={vertex, fill=red!24},
  edge/.style={draw,thick,-},
  weight/.style={font=\small},
  selected edge/.style={draw,line width=5pt,-,red!50},
  ignored edge/.style={draw,line width=5pt,-,black!20}
}
\begin{frame}
\frametitle{Graph Neural Network (GNN)}
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=1.8, auto,swap]
% Draw a network with 7 vertices and 11 edges
% First we draw the vertices
\foreach \pos/\name/\nname in {{(0,2)/a/45}, {(2,1)/b/12}, {(4,1)/c/8},
{(0,0)/d/16}, {(3,0)/e/1}, {(2,-1)/f/7}, {(4,-1)/g/-5}}
\node[vertex] (\name) at \pos {$\nname$};
% Connect vertices with edges and draw weights
\foreach \source/ \dest /\weight in {b/a/7, c/b/8,d/a/5,d/b/9,
e/b/7, e/c/5,e/d/15,
f/d/6,f/e/8,
g/e/9,g/f/11}
\path[edge] (\source) -- node[weight] {$\weight$} (\dest);
% % Start animating the vertex and edge selection.
% \foreach \vertex / \fr in {d/1,a/2,f/3,b/4,e/5,c/6,g/7}
% \path<\fr-> node[selected vertex] at (\vertex) {$\vertex$};
% For convenience we use a background layer to highlight edges
% This way we don't have to worry about the highlighting covering
% weight labels.
% \begin{pgfonlayer}{background}
% \pause
% \foreach \source / \dest in {d/a,d/f,a/b,b/e,e/c,e/g}
% \path<+->[selected edge] (\source.center) -- (\dest.center);
% \foreach \source / \dest / \fr in {d/b/4,d/e/5,e/f/5,b/c/6,f/g/7}
% \path<\fr->[ignored edge] (\source.center) -- (\dest.center);
% \end{pgfonlayer}
\end{tikzpicture}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{Graph Neural Network (GNN)}
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=1.8, auto,swap]
% Draw a network with 7 vertices and 11 edges
% First we draw the vertices
\foreach \pos/\name/\nname in {{(0,2)/a/45}, {(2,1)/b/12}, {(4,1)/c/8}, {(3,0)/e/1}, {(2,-1)/f/7}, {(4,-1)/g/-5}}
\node[vertex] (\name) at \pos {$\nname$};
\node[selected vertex] (d) at (0,0) {$f_\omega$};
% Connect vertices with edges and draw weights
\foreach \source/ \dest /\weight in {b/a/7, c/b/8,d/a/5,d/b/9,
e/b/7, e/c/5,e/d/15,
f/d/6,f/e/8,
g/e/9,g/f/11}
\path[edge] (\source) -- node[weight] {$\weight$} (\dest);
% % Start animating the vertex and edge selection.
% \foreach \vertex / \fr in {d/1,a/2,f/3,b/4,e/5,c/6,g/7}
% \path<\fr-> node[selected vertex] at (\vertex) {$\vertex$};
% For convenience we use a background layer to highlight edges
% This way we don't have to worry about the highlighting covering
% weight labels.
% \begin{pgfonlayer}{background}
% \pause
% \foreach \source / \dest in {d/a,d/f,a/b,b/e,e/c,e/g}
% \path<+->[selected edge] (\source.center) -- (\dest.center);
% \foreach \source / \dest / \fr in {d/b/4,d/e/5,e/f/5,b/c/6,f/g/7}
% \path<\fr->[ignored edge] (\source.center) -- (\dest.center);
% \end{pgfonlayer}
\end{tikzpicture}
\end{figure}
\end{frame}
\begin{frame}
\frametitle{Graph Neural Network (GNN)}
\begin{block}{Idea}
\begin{itemize}
\item $f_\omega$ is a contraction $\Rightarrow$ the iteration converges (Banach fixed-point theorem; sketched on the next slide).
\item For instance, $f_\omega$ can be encoded in a recurrent neural network.
\item Learning: chain the applications of $f_\omega$, then propagate the gradient.
\end{itemize}
\end{block}
\end{frame}
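% A minimal sketch (not in the original slides): the fixed-point iteration
% behind a GNN, with assumed notation h_v for node states and N(v) for
% neighbourhoods.
\begin{frame}
\frametitle{Graph Neural Network (GNN)}
\begin{exampleblock}{Fixed-point iteration (sketch)}
Each node $v$ carries a state $h_v$, updated from its neighbours $N(v)$:
$$h_v^{(t+1)} = f_\omega\!\left(h_v^{(t)},\, \{h_u^{(t)} : u \in N(v)\}\right)$$
Since $f_\omega$ is a contraction, the Banach fixed-point theorem guarantees a unique limit $h_v^{(\infty)}$, reached independently of the initial states.
\end{exampleblock}
\end{frame}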
\end{document}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% End:
File added
DEFAULT:
	pdflatex plan.tex
\documentclass[a4paper, 11pt]{article}
\usepackage{amsmath, amsfonts, amssymb}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\author{\textsc{Grienenberger, Kamtue, Girardot, Kozolinsky},\\\textsc{Poulot-Cazajous, Colisson, Jeanmaire, Oudin}}
\title{Draft Outline}
\date{\today}
\begin{document}
\maketitle
\section*{Introduction}
\section{State of the Art of SAT Solvers}
Emilie, Kawisorn, Xavier, Rémi\\
SAT solvers already use learning, but it is not deep learning. So what is
deep learning, and how can it be used in SAT solving?
\section{Deep Learning: What Is a Neural Network?}
Paul and Léo\\
Definition of a neural network; the difference between deep learning and
shallow learning.
\section{SAT Solving with Neural Networks}
Johan, Jules\\
How can deep learning be applied to SAT solving?
\end{document}
File added
DEFAULT:
	pdflatex report.tex
all:
	pdflatex report.tex
	bibtex report.aux
	pdflatex report.tex
	pdflatex report.tex
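# Hypothetical convenience target (not in the original Makefile): remove the
# auxiliary files that pdflatex and bibtex leave behind.
clean:
	rm -f report.aux report.bbl report.blg report.log report.out report.toc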
File added
File added
File added
File added
This diff is collapsed.
File added
File added
File added
File added
report/images/image_CSP.png (87.1 KiB)
report/images/image_SAT.png (63.1 KiB)
report/images/reseau_de_neurone.png (58.9 KiB)