Lecture Note 2
TensorBoard P3
Data Structures P4
Math Operations P6
Data Types P7
TF native vs. Python native types
TensorFlow vs. NumPy P9
Variables P10-14
Variables must be initialized or assigned before use
placeholder P15-16
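A minimal sketch of the variable/placeholder workflow above. This is the TF 1.x graph API, written against tf.compat.v1 so it also runs under TF 2; the names w and x are illustrative, not from the slides:

```python
import tensorflow.compat.v1 as tf  # TF1-style graph API, also shipped with TF2
tf.disable_eager_execution()

# A variable must be initialized (or assigned) before it can be read.
w = tf.Variable(10.0, name='w')
# A placeholder is fed a concrete value at session-run time via feed_dict.
x = tf.placeholder(tf.float32, shape=[], name='x')
out = w * x

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # initialize w first
    result = sess.run(out, feed_dict={x: 3.0})
    print(result)  # 30.0
```

Running `sess.run(out)` without the initializer call raises FailedPreconditionError, which is the "initialize/assign first" point above.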
Lecture Note 3
An example of logistic regression P3
- How to define a loss function? P4-6
- tf.data for loading data P6-9
- Optimizer P9-13
- e.g. logistic regression on MNIST P14
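The loss-function slides (and exercise 1h below) reference the Huber loss; here is a minimal element-wise sketch of it. This is my own formulation using tf.where instead of the slides' scalar tf.cond, so it works on whole tensors and under eager execution:

```python
import tensorflow as tf

def huber_loss(labels, predictions, delta=1.0):
    """Quadratic for small residuals, linear for large ones."""
    residual = tf.abs(labels - predictions)
    small = 0.5 * tf.square(residual)                 # |r| <= delta
    large = delta * residual - 0.5 * tf.square(delta)  # |r| >  delta
    return tf.where(residual <= delta, small, large)
```

With delta=1.0, a residual of 0.5 gives 0.5 * 0.25 = 0.125 (quadratic branch), while a residual of 3 gives 3 - 0.5 = 2.5 (linear branch), so outliers are penalized less harshly than with squared error.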
Lecture Note 4
Eager execution: makes TensorFlow easier to use from Python
e.g. ppt P19-P23: no longer need tf.Session().run
Automatic differentiation P25-28
Differences from traditional graph-mode TF commands P32
usage P37
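A minimal sketch of eager-mode automatic differentiation with tf.GradientTape (assumes eager execution is on, i.e. TF 2.x or `tf.enable_eager_execution()` in 1.x; the function y = x^2 + 2x is just an example):

```python
import tensorflow as tf

x = tf.constant(3.0)
with tf.GradientTape() as tape:
    tape.watch(x)            # constants must be watched explicitly
    y = x * x + 2.0 * x      # y = x^2 + 2x
grad = tape.gradient(y, x)   # dy/dx = 2x + 2 = 8 at x = 3
print(float(grad))
```

No session or graph construction is needed: ops execute immediately, and the tape records them so gradients can be pulled out afterwards.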
Assignment 1
1. Commonly used TensorFlow operations
"""
Simple exercises to get used to TensorFlow API
You should thoroughly test your code.
TensorFlow's official documentation should be your best friend here
CS20: "TensorFlow for Deep Learning Research"
cs20.stanford.edu
Created by Chip Huyen (chiphuyen@cs.stanford.edu)
"""
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf

sess = tf.InteractiveSession()
###############################################################################
# 1a: Create two random 0-d tensors x and y of any distribution.
# Create a TensorFlow object that returns x + y if x > y, and x - y otherwise.
# Hint: look up tf.cond()
# I do the first problem for you
###############################################################################

x = tf.random_uniform([])  # Empty array as shape creates a scalar.
y = tf.random_uniform([])
out = tf.cond(tf.greater(x, y), lambda: x + y, lambda: x - y)
print(sess.run(out))

###############################################################################
# 1b: Create two 0-d tensors x and y randomly selected from the range [-1, 1).
# Return x + y if x < y, x - y if x > y, 0 otherwise.
# Hint: Look up tf.case().
###############################################################################

# YOUR CODE
x = tf.random_uniform([], -1, 1)  # sample from [-1, 1), as the problem asks
y = tf.random_uniform([], -1, 1)
res = tf.case({tf.less(x, y): lambda: x + y,
               tf.greater(x, y): lambda: x - y},
              default=lambda: tf.constant(0.0), exclusive=True)
print(sess.run(res))

###############################################################################
# 1c: Create the tensor x of the value [[0, -2, -1], [0, 1, 2]]
# and y as a tensor of zeros with the same shape as x.
# Return a boolean tensor that yields Trues if x equals y element-wise.
# Hint: Look up tf.equal().
###############################################################################

# YOUR CODE
x = tf.constant([[0, -2, -1], [0, 1, 2]])
y = tf.zeros_like(x)
res = tf.equal(x, y)
print(sess.run(res))

###############################################################################
# 1d: Create the tensor x of value
# [29.05088806, 27.61298943, 31.19073486, 29.35532951,
#  30.97266006, 26.67541885, 38.08450317, 20.74983215,
#  34.94445419, 34.45999146, 29.06485367, 36.01657104,
#  27.88236427, 20.56035233, 30.20379066, 29.51215172,
#  33.71149445, 28.59134293, 36.05556488, 28.66994858].
# Get the indices of elements in x whose values are greater than 30.
# Hint: Use tf.where().
# Then extract elements whose values are greater than 30.
# Hint: Use tf.gather().
###############################################################################

# YOUR CODE
# The problem specifies a 1-d tensor of 20 elements.
x = tf.constant([29.05088806, 27.61298943, 31.19073486, 29.35532951,
                 30.97266006, 26.67541885, 38.08450317, 20.74983215,
                 34.94445419, 34.45999146, 29.06485367, 36.01657104,
                 27.88236427, 20.56035233, 30.20379066, 29.51215172,
                 33.71149445, 28.59134293, 36.05556488, 28.66994858])
indices = tf.where(tf.greater(x, 30))
print(sess.run(indices))
out = tf.gather(x, indices)
print(sess.run(out))

###############################################################################
# 1e: Create a diagonal 2-d tensor of size 6 x 6 with the diagonal values of 1,
# 2, ..., 6
# Hint: Use tf.range() and tf.diag().
###############################################################################

# YOUR CODE
diag_values = tf.range(1, 7)
res = tf.diag(diag_values)
print(sess.run(res))

###############################################################################
# 1f: Create a random 2-d tensor of size 10 x 10 from any distribution.
# Calculate its determinant.
# Hint: Look at tf.matrix_determinant().
###############################################################################

# YOUR CODE
x = tf.random_uniform((10, 10))
res = tf.matrix_determinant(x)
print(sess.run(res))

###############################################################################
# 1g: Create tensor x with value [5, 2, 3, 5, 10, 6, 2, 3, 4, 2, 1, 1, 0, 9].
# Return the unique elements in x
# Hint: use tf.unique(). Keep in mind that tf.unique() returns a tuple.
###############################################################################

# YOUR CODE
x = tf.constant([5, 2, 3, 5, 10, 6, 2, 3, 4, 2, 1, 1, 0, 9])
y, idx = tf.unique(x)
print(sess.run(y))

###############################################################################
# 1h: Create two tensors x and y of shape 300 from any normal distribution,
# as long as they are from the same distribution.
# Use tf.cond() to return:
# - The mean squared error of (x - y) if the average of all elements in (x - y) is negative, or
# - The sum of absolute value of all elements in the tensor (x - y) otherwise.
# Hint: see the Huber loss function in the lecture slides 3.
###############################################################################

# YOUR CODE
x = tf.random_normal([300])
y = tf.random_normal([300])
res = tf.cond(tf.reduce_mean(x - y) < 0,
              lambda: tf.reduce_mean(tf.square(x - y)),
              lambda: tf.reduce_sum(tf.abs(x - y)))
print(sess.run(res))