一、Tensor checks and default settings
torch.is_tensor(obj)
torch.is_storage(obj)
torch.set_default_dtype(d) The default dtype is torch.float32
torch.get_default_dtype() → torch.dtype
torch.set_default_tensor_type(t)
torch.numel(input) → int Returns the total number of elements in the input tensor
torch.set_printoptions(precision=None, threshold=None, edgeitems=None, linewidth=None, profile=None)
Sets options for printing tensors
torch.set_flush_denormal(mode) → bool
Parameters: mode (bool) – Controls whether to enable flush denormal mode or not
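The predicates and settings above can be sketched as follows (a minimal example, assuming PyTorch is installed):

```python
import torch

t = torch.zeros(2, 3)
print(torch.is_tensor(t))   # True for any torch.Tensor
print(torch.numel(t))       # 6, i.e. 2 * 3 elements

# Change the global default floating-point dtype, then restore it.
torch.set_default_dtype(torch.float64)
print(torch.get_default_dtype())  # torch.float64
torch.set_default_dtype(torch.float32)

# Control how tensors are printed, e.g. number of decimal places.
torch.set_printoptions(precision=2)
```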
二、Tensor creation
Note: random tensor creation is covered separately under Random sampling below.
torch.tensor(data, dtype=None, device=None, requires_grad=False) → Tensor
torch.sparse_coo_tensor(indices, values, size=None, dtype=None, device=None, requires_grad=False)→ Tensor
torch.as_tensor(data, dtype=None, device=None) → Tensor
torch.from_numpy(ndarray) → Tensor
torch.zeros(*sizes, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)→ Tensor
torch.zeros_like(input, dtype=None, layout=None, device=None, requires_grad=False) → Tensor
torch.ones(*sizes, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)→ Tensor
torch.ones_like(input, dtype=None, layout=None, device=None, requires_grad=False) → Tensor
torch.arange(start=0, end, step=1, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
torch.range(start=0, end, step=1, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor (deprecated in favor of torch.arange; unlike arange, range includes end)
torch.linspace(start, end, steps=100, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
torch.logspace(start, end, steps=100, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
torch.eye(n, m=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)→ Tensor
torch.empty(*sizes, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)→ Tensor
torch.empty_like(input, dtype=None, layout=None, device=None, requires_grad=False) → Tensor
torch.full(size, fill_value, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
torch.full_like(input, fill_value, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
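A quick tour of the creation functions above (a sketch, assuming PyTorch and NumPy are installed):

```python
import torch
import numpy as np

a = torch.tensor([[1, 2], [3, 4]])          # copies data into a new tensor
b = torch.from_numpy(np.array([1.0, 2.0]))  # shares memory with the ndarray
z = torch.zeros(2, 3)
o = torch.ones_like(z)                      # same shape/dtype as z, filled with 1
r = torch.arange(0, 10, 2)                  # tensor([0, 2, 4, 6, 8]) - end is exclusive
l = torch.linspace(0, 1, steps=5)           # 5 evenly spaced points in [0, 1]
e = torch.eye(3)                            # 3x3 identity matrix
f = torch.full((2, 2), 7.0)                 # 2x2 tensor filled with 7.0
```

Note that `torch.from_numpy` shares storage with the source array, so mutating one mutates the other; use `torch.tensor(ndarray)` when an independent copy is needed.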
三、Indexing, Slicing, Joining, Mutating Ops
torch.cat(tensors, dim=0, out=None) → Tensor
torch.chunk(tensor, chunks, dim=0) → List of Tensors. Splits a tensor into a given number of pieces along one dimension; chunks is an int, the number of pieces.
torch.gather(input, dim, index, out=None) → Tensor。Gathers values along an axis specified by dim.
torch.index_select(input, dim, index, out=None) → Tensor. Selects entries along dim by index, similar in spirit to slicing.
torch.masked_select(input, mask, out=None) → Tensor
torch.narrow(input, dimension, start, length) → Tensor
torch.nonzero(input, out=None) → LongTensor. Returns the indices of all non-zero elements — note that it returns indices, not values.
torch.reshape(input, shape) → Tensor
torch.split(tensor, split_size_or_sections, dim=0)
torch.squeeze(input, dim=None, out=None) → Tensor. Removes dimensions of size 1 (dimensions holding a single element) — hence "squeeze".
torch.stack(seq, dim=0, out=None) → Tensor
torch.t(input) → Tensor
torch.take(input, indices) → Tensor
torch.transpose(input, dim0, dim1) → Tensor
torch.unbind(tensor, dim=0) → seq
torch.unsqueeze(input, dim, out=None) → Tensor
torch.where(condition, x, y) → Tensor
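The indexing and joining ops above can be sketched as follows (assumes PyTorch is installed):

```python
import torch

x = torch.arange(6).reshape(2, 3)          # tensor([[0, 1, 2], [3, 4, 5]])
c = torch.cat([x, x], dim=0)               # shape (4, 3): stacked along rows
cols = torch.index_select(x, 1, torch.tensor([0, 2]))  # columns 0 and 2
m = torch.masked_select(x, x > 2)          # tensor([3, 4, 5]) - always 1-D
nz = torch.nonzero(x)                      # indices (not values) of non-zeros

y = torch.unsqueeze(x, 0)                  # shape (1, 2, 3): new size-1 dim
s = torch.squeeze(y)                       # size-1 dims removed: back to (2, 3)
w = torch.where(x > 2, x, torch.zeros_like(x))  # keep x where > 2, else 0
```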
四、Random sampling
torch.manual_seed(seed)
torch.initial_seed()
torch.get_rng_state()
torch.set_rng_state(new_state)
torch.default_generator = <torch._C.Generator object>
torch.bernoulli(input, *, generator=None, out=None) → Tensor
torch.multinomial(input, num_samples, replacement=False, out=None) → LongTensor
torch.normal()
torch.normal(mean, std, out=None) → Tensor
torch.rand(*sizes, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)→ Tensor
torch.rand_like(input, dtype=None, layout=None, device=None, requires_grad=False) → Tensor
torch.randint(low=0, high, size, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
torch.randint_like(input, low=0, high, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
torch.randn(*sizes, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)→ Tensor
torch.randn_like(input, dtype=None, layout=None, device=None, requires_grad=False) → Tensor
torch.randperm(n, out=None, dtype=torch.int64, layout=torch.strided, device=None, requires_grad=False) → LongTensor
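A minimal sketch of the sampling functions above (assumes PyTorch is installed):

```python
import torch

torch.manual_seed(42)                      # make results reproducible
u = torch.rand(2, 3)                       # uniform on [0, 1)
n = torch.randn(2, 3)                      # standard normal N(0, 1)
i = torch.randint(0, 10, (4,))             # integers in [0, 10)
p = torch.randperm(5)                      # random permutation of 0..4
b = torch.bernoulli(torch.full((3,), 0.5)) # independent coin flips with p=0.5
```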
In-place random sampling
There are a few more in-place random sampling functions defined on Tensors as well. Click through to refer to their documentation:
- torch.Tensor.bernoulli_() - in-place version of torch.bernoulli()
- torch.Tensor.cauchy_() - numbers drawn from the Cauchy distribution
- torch.Tensor.exponential_() - numbers drawn from the exponential distribution
- torch.Tensor.geometric_() - elements drawn from the geometric distribution
- torch.Tensor.log_normal_() - samples from the log-normal distribution
- torch.Tensor.normal_() - in-place version of torch.normal()
- torch.Tensor.random_() - numbers sampled from the discrete uniform distribution
- torch.Tensor.uniform_() - numbers sampled from the continuous uniform distribution
五、Serialization
torch.save(obj, f, pickle_module=<module 'pickle' from '/private/home/soumith/anaconda3/lib/python3.6/pickle.py'>, pickle_protocol=2)
Saves an object (e.g. a tensor) to disk
torch.load(f, map_location=None, pickle_module=<module 'pickle' from '/private/home/soumith/anaconda3/lib/python3.6/pickle.py'>)
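A round-trip sketch of save/load (the file name is arbitrary; a temporary directory is used here):

```python
import os
import tempfile
import torch

t = torch.arange(5)
path = os.path.join(tempfile.mkdtemp(), "t.pt")

torch.save(t, path)        # pickles the tensor to disk
loaded = torch.load(path)  # restores it
print(torch.equal(t, loaded))  # True
```

When loading a checkpoint saved on GPU onto a CPU-only machine, pass `map_location="cpu"` to `torch.load`.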
六、Parallelism
torch.get_num_threads() → int
Gets the number of OpenMP threads used for parallelizing CPU operations
torch.set_num_threads(int)
Sets the number of OpenMP threads used for parallelizing CPU operations
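For example, reading and writing the CPU thread count:

```python
import torch

n = torch.get_num_threads()  # current OpenMP thread count, at least 1
torch.set_num_threads(n)     # setting it back to the same value is a no-op
```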
七、Locally disabling gradient computation
torch.no_grad()
torch.enable_grad()
torch.set_grad_enabled()
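These three are context managers that switch autograd recording on or off for a block of code:

```python
import torch

x = torch.ones(3, requires_grad=True)

with torch.no_grad():
    y = x * 2
print(y.requires_grad)  # False - the graph is not recorded inside no_grad

with torch.set_grad_enabled(True):
    z = x * 2
print(z.requires_grad)  # True - recording explicitly re-enabled
```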
八、Math operations
8.1 Pointwise Ops
torch.abs(input, out=None) → Tensor
torch.acos(input, out=None) → Tensor
torch.add()
torch.add(input, value, out=None)
torch.add(input, value=1, other, out=None)
torch.addcdiv(tensor, value=1, tensor1, tensor2, out=None) → Tensor. Divides tensor1 by tensor2 element-wise, multiplies the result by value, and adds it to tensor.
torch.addcmul(tensor, value=1, tensor1, tensor2, out=None) → Tensor. Multiplies tensor1 by tensor2 element-wise, multiplies the result by value, and adds it to tensor.
torch.asin(input, out=None) → Tensor
torch.atan(input, out=None) → Tensor
torch.atan2(input1, input2, out=None) → Tensor
torch.ceil(input, out=None) → Tensor
torch.clamp(input, min, max, out=None) → Tensor. Replaces every element below min with min and every element above max with max.
torch.clamp(input, *, min, out=None) → Tensor
torch.clamp(input, *, max, out=None) → Tensor
torch.cos(input, out=None) → Tensor
torch.cosh(input, out=None) → Tensor
torch.div()
torch.div(input, value, out=None) → Tensor
torch.div(input, other, out=None) → Tensor
torch.digamma(input, out=None) → Tensor
torch.erf(tensor, out=None) → Tensor
torch.erfc(input, out=None) → Tensor
torch.erfinv(input, out=None) → Tensor
torch.exp(input, out=None) → Tensor
torch.expm1(input, out=None) → Tensor
torch.floor(input, out=None) → Tensor
torch.fmod(input, divisor, out=None) → Tensor
torch.frac(input, out=None) → Tensor
torch.lerp(start, end, weight, out=None)
torch.log(input, out=None) → Tensor
torch.log10(input, out=None) → Tensor
torch.log1p(input, out=None) → Tensor
torch.log2(input, out=None) → Tensor
torch.mul()
torch.mul(input, value, out=None)
torch.mul(input, other, out=None)
torch.mvlgamma(input, p) → Tensor
torch.neg(input, out=None) → Tensor
torch.pow()
torch.pow(input, exponent, out=None) → Tensor
torch.pow(base, input, out=None) → Tensor
torch.reciprocal(input, out=None) → Tensor
torch.remainder(input, divisor, out=None) → Tensor
torch.rsqrt(input, out=None) → Tensor
torch.sigmoid(input, out=None) → Tensor
torch.sign(input, out=None) → Tensor
torch.sin(input, out=None) → Tensor
torch.sinh(input, out=None) → Tensor
torch.sqrt(input, out=None) → Tensor
torch.tan(input, out=None) → Tensor
torch.tanh(input, out=None) → Tensor
torch.trunc(input, out=None) → Tensor
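A few of the pointwise ops above in action, including the fused addcdiv/addcmul forms (note that in current PyTorch, value is passed as a keyword argument):

```python
import torch

x = torch.tensor([-1.5, 0.0, 2.5])
a = torch.abs(x)                    # |x| elementwise
c = torch.clamp(x, min=-1, max=1)   # values clipped to [-1, 1]
s = torch.add(x, 10)                # elementwise x + 10

t  = torch.ones(3)
t1 = torch.tensor([2.0, 4.0, 8.0])
t2 = torch.tensor([1.0, 2.0, 4.0])
d = torch.addcdiv(t, t1, t2, value=0.5)  # t + 0.5 * t1 / t2 = [2., 2., 2.]
m = torch.addcmul(t, t1, t2, value=0.5)  # t + 0.5 * t1 * t2
```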