AI Resource Digest: Issue 15 (2017-01-24)



1.【Blog】Nuts and Bolts of Applying Deep Learning

Summary:

This weekend was very hectic (catching up on courses and studying for a statistics quiz), but I managed to squeeze in some time to watch the BADLS livestream on YouTube. For those of you wondering what that is, BADLS is a 2-day conference hosted at Stanford University, consisting of back-to-back presentations on a variety of topics spanning NLP, Computer Vision, Unsupervised Learning, and Reinforcement Learning. Additionally, top DL software libraries such as Torch, Theano, and TensorFlow were presented.

There were some super interesting talks from leading experts in the field, with speakers from Twitter, OpenAI, the Université de Montréal, and Baidu, to name a few. Of the plethora of presentations, there was one somewhat non-technical one, given by Andrew Ng, that really piqued my interest.

In this blog post, I'm going to try to give an overview of the main ideas outlined in his talk. The goal is to pause a bit and examine the ongoing trends in Deep Learning thus far, as well as to gain some insight into applying DL in practice.

Original link:


2.【Paper & Code】Value Iteration Networks

Summary:

We introduce the value iteration network (VIN): a fully differentiable neural network with a ‘planning module’ embedded within. VINs can learn to plan, and are suitable for predicting outcomes that involve planning-based reasoning, such as policies for reinforcement learning. Key to our approach is a novel differentiable approximation of the value-iteration algorithm, which can be represented as a convolutional neural network, and trained end-to-end using standard backpropagation. We evaluate VIN-based policies on discrete and continuous path-planning domains, and on a natural-language-based search task. We show that by learning an explicit planning computation, VIN policies generalize better to new, unseen domains.
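
To make the key idea concrete, here is a minimal NumPy sketch (my own illustration, not the authors' code) of value iteration written the way a VIN computes it: one convolution per action to form Q, then a max over the action channels to form V. The 3x3 kernels below are hand-fixed stand-ins; in an actual VIN they are learned end-to-end by backpropagation.

```python
import numpy as np
from scipy.signal import convolve2d

def vin_planning_module(reward, kernels, num_iters=30):
    """K steps of value iteration on a 2-D grid, expressed as a CNN computes it."""
    v = np.zeros_like(reward)
    for _ in range(num_iters):
        # Q(s, a) = R(s) + sum_{s'} W_a(s, s') V(s'): one convolution per action
        q = np.stack([reward + convolve2d(v, k, mode="same") for k in kernels])
        # V(s) = max_a Q(s, a): a max over the action "channels"
        v = q.max(axis=0)
    return v

# Hypothetical toy gridworld: +1 reward at the goal, a small step cost elsewhere.
gamma = 0.9
reward = -0.01 * np.ones((8, 8))
reward[7, 7] = 1.0
# Deterministic move kernels (up/down/left/right), discounted by gamma.
kernels = []
for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
    k = np.zeros((3, 3))
    k[1 + dy, 1 + dx] = gamma
    kernels.append(k)
print(vin_planning_module(reward, kernels).round(2))
```

Because every step is a convolution followed by a channel-wise max, the whole module is differentiable, which is what allows the transition kernels to be trained with standard backpropagation.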

Original link:

Code link:


3.【Blog】Understanding the backward pass through Batch Normalization Layer

Summary:

At the moment there is a wonderful course running at Stanford University, CS231n, held by Andrej Karpathy, Justin Johnson and Fei-Fei Li. Fortunately, all the course material is provided for free, and all the lectures are recorded and uploaded to YouTube. This class gives a wonderful intro to machine learning/deep learning, coming along with programming assignments.
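
Since the whole point of the linked post is the step-by-step gradient derivation, a compact reference implementation may be useful alongside it. Below is a minimal NumPy sketch of BatchNorm's forward pass and the chain-rule backward pass; the variable names are mine, not the post's.

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """x: (N, D) mini-batch; gamma, beta: (D,) learned scale and shift."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    out = gamma * x_hat + beta
    return out, (x, x_hat, mu, var, gamma, eps)

def batchnorm_backward(dout, cache):
    """Walk the chain rule back through every node of the forward graph."""
    x, x_hat, mu, var, gamma, eps = cache
    n = x.shape[0]
    inv_std = 1.0 / np.sqrt(var + eps)
    dbeta = dout.sum(axis=0)
    dgamma = (dout * x_hat).sum(axis=0)
    dx_hat = dout * gamma
    # Gradients of the batch variance and mean, then of the input itself.
    dvar = np.sum(dx_hat * (x - mu) * -0.5 * inv_std**3, axis=0)
    dmu = np.sum(-dx_hat * inv_std, axis=0) + dvar * np.mean(-2.0 * (x - mu), axis=0)
    dx = dx_hat * inv_std + dvar * 2.0 * (x - mu) / n + dmu / n
    return dx, dgamma, dbeta
```

A numerical gradient check is an easy way to verify these formulas against the derivation in the post.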

Original link:


4.【Blog & Code】Generating Abstract Patterns with TensorFlow

Summary:

In recent neural-network-based image generation techniques, the generator network typically attempts to draw the entire image at once. For example, if the desired resolution of an output image is 256x256, then the final layer of the neural network will have 65,536 values for a black-and-white image. It may not be feasible to increase the output resolution to modern image resolutions (2880x1800) due to the memory and scalability limitations of these algorithms.

In this post I will describe a really simple way to get neural nets to generate a high-resolution image. Rather than generating every pixel at once, the neural network's job is to generate an individual pixel's intensity or colour, given the location of that pixel. The entire image can then be generated by querying the network once for each pixel in the desired output image. This method can generate very high-resolution images (limited only by memory), provides an elegant way to zoom in and out of an image, and produces images with some fractal-like properties.

There have been some demos on the web experimenting with this technique, including karpathy's convnetjs demo and various JavaScript implementations of generative art.

In this post I will describe how to implement this simple technique to generate random abstract art in TensorFlow. The reason I wanted to implement this in TensorFlow is so that it can serve as a base for more interesting future work that takes advantage of TensorFlow's machine learning capabilities, and so that this type of generative network can be used to create non-random images as well; I will explain this in future posts.
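
To illustrate the per-pixel idea before following the links, here is a minimal NumPy sketch (my own, not the post's TensorFlow code) of a small randomly weighted network queried once per (x, y) coordinate; the image is just that function evaluated on a grid, so zooming amounts to sampling the same function over a narrower range.

```python
import numpy as np

def random_abstract_image(width=512, height=512, n_hidden=32,
                          n_layers=4, scale=8.0, seed=0):
    """Map each pixel's (x, y, r) coordinates through a random tanh network."""
    rng = np.random.default_rng(seed)
    xs = np.linspace(-scale, scale, width)
    ys = np.linspace(-scale, scale, height)
    x, y = np.meshgrid(xs, ys)
    r = np.sqrt(x**2 + y**2)                                 # radial input adds symmetry
    h = np.stack([x.ravel(), y.ravel(), r.ravel()], axis=1)  # (H*W, 3)
    for _ in range(n_layers):                                # random dense layers
        w = rng.standard_normal((h.shape[1], n_hidden))
        h = np.tanh(h @ w)
    w_out = rng.standard_normal((n_hidden, 1))
    img = (np.tanh(h @ w_out) + 1.0) / 2.0                   # intensity in [0, 1]
    return img.reshape(height, width)

img = random_abstract_image()  # any resolution works: only the grid changes
```

Because the network defines intensity as a continuous function of coordinates, rendering at 2880x1800 instead of 256x256 changes only the size of the coordinate grid, not the network itself.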

Original link:


5.【Blog & Code】LSTM Neural Network for Time Series Prediction

Summary:

Neural networks these days are the “go-to” thing when talking about new fads in machine learning. As such, there's a plethora of courses and tutorials out there on basic vanilla neural nets, from simple tutorials to complex articles describing their workings in depth.

For deeper networks, the obsession with image classification tasks seems to have also caused tutorials to appear on the more complex convolutional neural networks. This is great if you're into that sort of thing; for me, however, I'm not particularly enthused by classifying images. I am far more interested in data with timeframes. And this is where recurrent neural networks (RNNs) come in rather handy (and I'm guessing that by reading this article you'll know that long short-term memory, LSTM, networks are the most popular and useful variants of RNNs. If not, there are plenty of articles describing LSTMs out there you should probably check out first).

Now whilst there are lots of public research papers and articles on LSTMs, what I've found is that pretty much all of these deal with the theoretical workings and maths behind them, and the examples they give don't really show the predictive look-ahead power of LSTMs on a time series. Again, that is all great if you're looking to learn the intricate workings of LSTMs, but not ideal if you just want to get something up and running.
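
In that spirit of getting something up and running, here is a minimal Keras sketch of the usual windowed set-up for this task (a toy example with a synthetic sine series, not the author's code):

```python
import numpy as np
from tensorflow import keras

WINDOW = 50

def make_windows(series, window=WINDOW):
    """Slice a 1-D series into (past window, next value) training pairs."""
    x, y = [], []
    for i in range(len(series) - window):
        x.append(series[i : i + window])
        y.append(series[i + window])
    # LSTMs expect input shaped (samples, timesteps, features).
    return np.array(x)[..., np.newaxis], np.array(y)

# Toy stand-in data: a noisy sine wave instead of a real time series.
t = np.linspace(0, 100, 2000)
series = np.sin(t) + 0.1 * np.random.randn(t.size)
x, y = make_windows(series)

model = keras.Sequential([
    keras.Input(shape=(WINDOW, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),   # regress the next point in the series
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=2, batch_size=64, verbose=0)

# The look-ahead part: feed each prediction back in as input to
# forecast several steps beyond the observed series.
history = list(series[-WINDOW:])
for _ in range(20):
    inp = np.array(history[-WINDOW:])[np.newaxis, :, np.newaxis]
    history.append(float(model.predict(inp, verbose=0)[0, 0]))
print(history[-20:])
```

The recursive forecasting loop at the end is what exposes the predictive look-ahead behaviour the quoted paragraph says most tutorials skip.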

Original link:


Reposted from: http://hpdqb.baihongyu.com/
