SciTech-BigDataAIML-Tensorflow-Variables

import tensorflow as tf

# Debug-friendly setup: run tf.functions eagerly, enable tf.data debug mode,
# log where each op executes, and let ops fall back to a compatible device.
tf.config.run_functions_eagerly(True)
tf.data.experimental.enable_debug_mode()
tf.debugging.set_log_device_placement(True)
tf.config.set_soft_device_placement(True)

tf_cpus = tf.config.list_physical_devices('CPU')
tf_gpus = tf.config.list_physical_devices('GPU')
tf_logical_cpus = tf.config.list_logical_devices('CPU')
tf_logical_gpus = tf.config.list_logical_devices('GPU')

print("TF_PHYS_CPUs: %s" % (', '.join('%r' % c.name for c in tf_cpus) or 'None'))
print("TF_LOGI_CPUs: %s" % (', '.join('%r' % c.name for c in tf_logical_cpus) or 'None'))
print("TF_PHYS_GPUs: %s" % ('\n  '.join('%r' % g.name for g in tf_gpus) or 'None'))
print("TF_LOGI_GPUs: %s\n" % ('\n  '.join('%r' % g.name for g in tf_logical_gpus) or 'None'))

# tf_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
# tf.config.experimental_connect_to_cluster(tf_resolver)
# tf.tpu.experimental.initialize_tpu_system(tf_resolver)
# tf_logi_tpus = tf.config.list_logical_devices('TPU')

Lifecycles, naming, and watching

  • A tf.Variable instance has the same lifecycle as any other Python object in Python-based TensorFlow:
    when there are no remaining references to a variable, it is automatically deallocated.

  • Variables can also be named, which can help you track and debug them.
    Two different variables can share the same name, as the snippet below shows.

# my_tensor comes from earlier in the guide; defined here so the snippet runs standalone.
my_tensor = tf.constant([[1.0, 2.0], [3.0, 4.0]])

# Create a and b; they will have the same name but are backed by different tensors.
a = tf.Variable(my_tensor, name="Mark")

# A new variable with the same name but a different value.
# Note that the scalar add is broadcast.
b = tf.Variable(my_tensor + 1, name="Mark")

# These are elementwise-unequal, despite having the same name.
print(a == b)
  • Variable names are preserved when saving and loading models.
    By default, variables in models acquire unique names automatically, so you don't need to assign them yourself unless you want to.

  • You can turn off gradients for a variable by setting trainable to False at creation.
    Although variables are important for differentiation, some variables do not need to be differentiated;
    a training step counter is one example (see the gradient check after this snippet):

    step_counter = tf.Variable(1, trainable=False)
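A minimal sketch of what trainable=False means for autodiff (assuming eager execution): by default, tf.GradientTape watches only trainable variables, so the non-trainable one yields no gradient.

x = tf.Variable(3.0)                   # trainable, watched by the tape automatically
c = tf.Variable(2.0, trainable=False)  # non-trainable, not watched
with tf.GradientTape() as tape:
  y = c * x * x
print(tape.gradient(y, [x, c]))  # [tf.Tensor(12.0, ...), None]: no gradient for c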

Placing variables and tensors
For better performance, TensorFlow will attempt to place tensors and variables on the fastest device compatible with their dtype. This means most variables are placed on a GPU if one is available.

However, you can override this. The following snippet places a float tensor and a variable on the CPU even when a GPU is available. With device placement logging turned on (see the setup at the top), you can see where the variable is placed.

Note: Although manual placement works, using distribution strategies can be a more convenient and scalable way to optimize your computation.
If you run this notebook on different backends, with and without a GPU, you will see different logging. Note that logging device placement must be turned on at the start of the session.


tf.debugging.set_log_device_placement(True)

with tf.device('CPU:0'):
  a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
  b = tf.Variable([[1.0, 2.0, 3.0]])

with tf.device('GPU:0'):
  # Element-wise multiply
  k = a * b

print(k)

Note: Because tf.config.set_soft_device_placement is turned on by default, this code will still run even on a machine without a GPU; the multiplication step will simply happen on the CPU.
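A quick way to confirm the placement (this assumes the snippet above has run): every tf.Variable and tf.Tensor exposes a .device string.

print(a.device)  # e.g. '/job:localhost/replica:0/task:0/device:CPU:0'
print(k.device)  # GPU:0 when one is available; otherwise soft placement falls back to CPU:0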

print("Tensorflow CPUs:%s%s\n" % (                                                                                                                        
    '\n  phys:' + (', '.join(['\n    %r'%(c.name,) for c in tf_cpus        ]) if tf_cpus         else ' None'),
    '\n  logi:' + (', '.join(['\n    %r'%(c.name,) for c in tf_logical_gpus]) if tf_logical_cpus else ' None'),
))
print("Tensorflow GPUs:%s%s\n" % (
    '\n  phys:' + (', '.join(['\n    %r'%(g.name,) for g in tf_gpus        ]) if tf_gpus         else ' None'),
    '\n  logi:' + (', '.join(['\n    %r'%(g.name,) for g in tf_logical_gpus]) if tf_logical_gpus else ' None'),
))


gpus = tf.config.list_physical_devices('GPU')
if gpus:
  # Create 2 virtual (logical) GPUs with 1GB memory each on the first physical GPU.
  try:
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=1024),
         tf.config.LogicalDeviceConfiguration(memory_limit=1024)])
  except RuntimeError as e:
    # Virtual devices must be configured before the GPUs have been initialized,
    # i.e. before any earlier call has touched the runtime.
    print(e)

  logical_gpus = tf.config.list_logical_devices('GPU')
  print(len(gpus), "Physical GPU,", len(logical_gpus), "Logical GPUs")

  # Mirror the model across the logical GPUs; each replica holds a copy of the variables.
  strategy = tf.distribute.MirroredStrategy(logical_gpus)
  with strategy.scope():
    inputs = tf.keras.layers.Input(shape=(1,))
    predictions = tf.keras.layers.Dense(1)(inputs)
    model = tf.keras.models.Model(inputs=inputs, outputs=predictions)
    model.compile(loss='mse', optimizer=tf.keras.optimizers.SGD(learning_rate=0.2))
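A short usage sketch, continuing inside the "if gpus:" block above (the toy data here is hypothetical): calling fit lets MirroredStrategy split each batch across the two logical GPUs.

  x = tf.random.uniform((64, 1))
  y = 3.0 * x + 1.0  # hypothetical linear target for the demo
  model.fit(x, y, epochs=1, batch_size=16)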

