A multi-input, multi-output model with Keras 2.0

2024-01-10

I have searched online for a long time but found nothing. Please help me, or give me some ideas on how to achieve this.

I have built a model with 3 inputs and 2 outputs, as shown below.

The code block below is from def build_srgan_model in class SRGANNetwork:

    ip = Input(shape=(self.img_width, self.img_height, 3), name='x_generator')
    ip_gan = Input(shape=(large_width, large_height, 3), name='x_discriminator')
    ip_vgg = Input(shape=(large_width, large_height, 3), name='x_vgg')

    sr_output = self.generative_network.create_sr_model(ip)
    self.generative_model_ = Model(ip, sr_output)

    gan_output = self.discriminative_network.append_gan_network(ip_gan)
    self.discriminative_model_ = Model(ip_gan, gan_output)

    gan_output = self.discriminative_model_(self.generative_model_.output)
    vgg_output = self.vgg_network.append_vgg_network(self.generative_model_.output, ip_vgg)

    self.srgan_model_ = Model(inputs=[ip, ip_gan, ip_vgg], outputs=[gan_output, vgg_output])
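
To make sure I understand the Keras 2 functional API itself, here is the kind of minimal multi-input / multi-output model I believe Keras 2 expects. The shapes and layer names are just placeholders of mine, not the SRGAN layers:

    from keras.layers import Input, Dense, concatenate
    from keras.models import Model

    # three hypothetical inputs
    a = Input(shape=(32,), name='in_a')
    b = Input(shape=(32,), name='in_b')
    c = Input(shape=(32,), name='in_c')

    h = concatenate([a, b, c], name='merge_abc')
    h = Dense(64, activation='relu', name='shared_dense')(h)

    # two hypothetical outputs
    out_1 = Dense(1, activation='sigmoid', name='out_1')(h)
    out_2 = Dense(10, activation='softmax', name='out_2')(h)

    model = Model(inputs=[a, b, c], outputs=[out_1, out_2])
    model.compile(optimizer='adam',
                  loss={'out_1': 'binary_crossentropy',
                        'out_2': 'categorical_crossentropy'})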

Here is the full code; it is quite long.

The VGGNetwork class:

'''
Helper class to load VGG and its weights to the FastNet model
'''

def __init__(self, img_width=384, img_height=384, vgg_weight=1.0):
    self.img_height = img_height
    self.img_width = img_width
    self.vgg_weight = vgg_weight

    self.vgg_layers = None

def append_vgg_network(self, x_in, true_X_input, pre_train=False):

    # Append the initial inputs to the outputs of the SRResNet
    x = merge([x_in, true_X_input], mode='concat', concat_axis=0)

    # Normalize the inputs via custom VGG Normalization layer
    x = Normalize(name="normalize_vgg")(x)

    # Begin adding the VGG layers
    x = Convolution2D(64, 3, 3, activation='relu', name='vgg_conv1_1', border_mode='same')(x)

    x = Convolution2D(64, 3, 3, activation='relu', name='vgg_conv1_2', border_mode='same')(x)
    x = MaxPooling2D(name='vgg_maxpool1')(x)

    x = Convolution2D(128, 3, 3, activation='relu', name='vgg_conv2_1', border_mode='same')(x)

    if pre_train:
        vgg_regularizer2 = ContentVGGRegularizer(weight=self.vgg_weight)
        x = Convolution2D(128, 3, 3, activation='relu', name='vgg_conv2_2', border_mode='same',
                          activity_regularizer=vgg_regularizer2)(x)
    else:
        x = Convolution2D(128, 3, 3, activation='relu', name='vgg_conv2_2', border_mode='same')(x)
    x = MaxPooling2D(name='vgg_maxpool2')(x)

    x = Convolution2D(256, 3, 3, activation='relu', name='vgg_conv3_1', border_mode='same')(x)
    x = Convolution2D(256, 3, 3, activation='relu', name='vgg_conv3_2', border_mode='same')(x)

    x = Convolution2D(256, 3, 3, activation='relu', name='vgg_conv3_3', border_mode='same')(x)
    x = MaxPooling2D(name='vgg_maxpool3')(x)

    x = Convolution2D(512, 3, 3, activation='relu', name='vgg_conv4_1', border_mode='same')(x)
    x = Convolution2D(512, 3, 3, activation='relu', name='vgg_conv4_2', border_mode='same')(x)

    x = Convolution2D(512, 3, 3, activation='relu', name='vgg_conv4_3', border_mode='same')(x)
    x = MaxPooling2D(name='vgg_maxpool4')(x)

    x = Convolution2D(512, 3, 3, activation='relu', name='vgg_conv5_1', border_mode='same')(x)
    x = Convolution2D(512, 3, 3, activation='relu', name='vgg_conv5_2', border_mode='same')(x)

    if not pre_train:
        vgg_regularizer5 = ContentVGGRegularizer(weight=self.vgg_weight)
        x = Convolution2D(512, 3, 3, activation='relu', name='vgg_conv5_3', border_mode='same',
                      activity_regularizer=vgg_regularizer5)(x)
    else:
        x = Convolution2D(512, 3, 3, activation='relu', name='vgg_conv5_3', border_mode='same')(x)
    x = MaxPooling2D(name='vgg_maxpool5')(x)

    return x

def load_vgg_weight(self, model):
    # Loading VGG 16 weights
    if K.image_dim_ordering() == "th":
        weights = get_file('vgg16_weights_th_dim_ordering_th_kernels_notop.h5', THEANO_WEIGHTS_PATH_NO_TOP,
                               cache_subdir='models')
    else:
        weights = get_file('vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5', TF_WEIGHTS_PATH_NO_TOP,
                               cache_subdir='models')
    f = h5py.File(weights)

    layer_names = [name for name in f.attrs['layer_names']]

    if self.vgg_layers is None:
        self.vgg_layers = [layer for layer in model.layers
                           if 'vgg_' in layer.name]

    for i, layer in enumerate(self.vgg_layers):
        g = f[layer_names[i]]  # g holds the weight arrays of the i-th VGG layer in the .h5 file (keyed by 'weight_names')
        weights = [g[name] for name in g.attrs['weight_names']]
        layer.set_weights(weights)

    # Freeze all VGG layers
    for layer in self.vgg_layers:
        layer.trainable = False

    return model
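
As far as I can tell, several calls in append_vgg_network and load_vgg_weight only exist in Keras 1: merge(..., mode='concat'), Convolution2D(64, 3, 3, border_mode=...), and K.image_dim_ordering(). My rough, untested understanding of the Keras 2 equivalents is the following (the two Input placeholders are just stand-ins for the tensors the method receives):

    from keras import backend as K
    from keras.layers import Input, concatenate, Conv2D, MaxPooling2D

    # hypothetical stand-ins for x_in and true_X_input
    x_in = Input(shape=(384, 384, 3))
    true_X_input = Input(shape=(384, 384, 3))

    # Keras 1: x = merge([x_in, true_X_input], mode='concat', concat_axis=0)
    x = concatenate([x_in, true_X_input], axis=0)

    # Keras 1: Convolution2D(64, 3, 3, activation='relu', border_mode='same', ...)
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='vgg_conv1_1')(x)
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='vgg_conv1_2')(x)
    x = MaxPooling2D(name='vgg_maxpool1')(x)

    # Keras 1: K.image_dim_ordering() == "th"
    channels_first = (K.image_data_format() == 'channels_first')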

The DiscriminatorNetwork class:

def __init__(self, img_width=384, img_height=384, adversarial_loss_weight=1, small_model=False):
    self.img_width = img_width
    self.img_height = img_height
    self.adversarial_loss_weight = adversarial_loss_weight
    self.small_model = small_model

    self.k = 3
    self.mode = 2
    self.weights_path = "weights/Discriminator weights.h5"

    self.gan_layers = None

def append_gan_network(self, true_X_input):

    # Normalize the inputs via custom VGG Normalization layer
    x = Normalize(type="gan", value=127.5, name="gan_normalize")(true_X_input)

    x = Convolution2D(64, self.k, self.k, border_mode='same', name='gan_conv1_1')(x)
    x = LeakyReLU(0.3, name="gan_lrelu1_1")(x)

    x = Convolution2D(64, self.k, self.k, border_mode='same', name='gan_conv1_2', subsample=(2, 2))(x)
    x = LeakyReLU(0.3, name='gan_lrelu1_2')(x)
    x = BatchNormalization(mode=self.mode, axis=channel_axis, name='gan_batchnorm1_1')(x)

    filters = [128, 256] if self.small_model else [128, 256, 512]

    for i, nb_filters in enumerate(filters):
        for j in range(2):
            subsample = (2, 2) if j == 1 else (1, 1)

            x = Convolution2D(nb_filters, self.k, self.k, border_mode='same', subsample=subsample,
                              name='gan_conv%d_%d' % (i + 2, j + 1))(x)
            x = LeakyReLU(0.3, name='gan_lrelu_%d_%d' % (i + 2, j + 1))(x)
            x = BatchNormalization(mode=self.mode, axis=channel_axis, name='gan_batchnorm%d_%d' % (i + 2, j + 1))(x)

    x = Flatten(name='gan_flatten')(x)

    output_dim = 128 if self.small_model else 1024

    x = Dense(output_dim, name='gan_dense1')(x)
    x = LeakyReLU(0.3, name='gan_lrelu5')(x)

    gan_regularizer = AdversarialLossRegularizer(weight=self.adversarial_loss_weight)
    x = Dense(2, activation="softmax", activity_regularizer=gan_regularizer, name='gan_output')(x)

    return x

def set_trainable(self, model, value=True):
    if self.gan_layers is None:
        disc_model = [layer for layer in model.layers
                      if 'model' in layer.name][0] # Only disc model is an inner model

        self.gan_layers = [layer for layer in disc_model.layers
                           if 'gan_' in layer.name]

    for layer in self.gan_layers:
        layer.trainable = value

def load_gan_weights(self, model):
    f = h5py.File(self.weights_path)

    layer_names = [name for name in f.attrs['layer_names']]
    layer_names = layer_names[1:] # First is an input layer. Not needed.

    if self.gan_layers is None:
        self.gan_layers = [layer for layer in model.layers
                            if 'gan_' in layer.name]

    for i, layer in enumerate(self.gan_layers):
        g = f[layer_names[i]]
        weights = [g[name] for name in g.attrs['weight_names']]
        layer.set_weights(weights)

    print("GAN Model weights loaded.")
    return model

def save_gan_weights(self, model):
    print('GAN Weights are being saved.')
    model.save_weights(self.weights_path, overwrite=True)
    print('GAN Weights saved.')

The GenerativeNetwork class:

def __init__(self, img_width=96, img_height=96, batch_size=16, nb_upscales=2, small_model=False,
             content_weight=1, tv_weight=2e5, gen_channels=64):
    self.img_width = img_width
    self.img_height = img_height
    self.batch_size = batch_size
    self.small_model = small_model
    self.nb_scales = nb_upscales

    self.content_weight = content_weight
    self.tv_weight = tv_weight

    self.filters = gen_channels
    self.mode = 2
    self.init = 'glorot_uniform'

    self.sr_res_layers = None
    self.sr_weights_path = "weights/SRGAN.h5"

    self.output_func = None

def create_sr_model(self, ip):

    x = Convolution2D(self.filters, 5, 5, activation='linear', border_mode='same', name='sr_res_conv1',
                      init=self.init)(ip)
    x = BatchNormalization(axis=channel_axis, mode=self.mode, name='sr_res_bn_1')(x)
    x = LeakyReLU(alpha=0.25, name='sr_res_lr1')(x)

    # x = Convolution2D(self.filters, 5, 5, activation='linear', border_mode='same', name='sr_res_conv2')(x)
    # x = BatchNormalization(axis=channel_axis, mode=self.mode, name='sr_res_bn_2')(x)
    # x = LeakyReLU(alpha=0.25, name='sr_res_lr2')(x)

    nb_residual = 5 if self.small_model else 15

    for i in range(nb_residual):
        x = self._residual_block(x, i + 1)

    for scale in range(self.nb_scales):
        x = self._upscale_block(x, scale + 1)

    scale = 2 ** self.nb_scales
    tv_regularizer = TVRegularizer(img_width=self.img_width * scale, img_height=self.img_height * scale,
                                   weight=self.tv_weight)

    x = Convolution2D(3, 5, 5, activation='tanh', border_mode='same', activity_regularizer=tv_regularizer,
                      init=self.init, name='sr_res_conv_final')(x)

    x = Denormalize(name='sr_res_conv_denorm')(x)

    return x

def _residual_block(self, ip, id):
    init = ip

    x = Convolution2D(self.filters, 3, 3, activation='linear', border_mode='same', name='sr_res_conv_' + str(id) + '_1',
                      init=self.init)(ip)
    x = BatchNormalization(axis=channel_axis, mode=self.mode, name='sr_res_bn_' + str(id) + '_1')(x)
    x = LeakyReLU(alpha=0.25, name="sr_res_activation_" + str(id) + "_1")(x)

    x = Convolution2D(self.filters, 3, 3, activation='linear', border_mode='same', name='sr_res_conv_' + str(id) + '_2',
                      init=self.init)(x)
    x = BatchNormalization(axis=channel_axis, mode=self.mode, name='sr_res_bn_' + str(id) + '_2')(x)

    m = merge([x, init], mode='sum', name="sr_res_merge_" + str(id))

    return m

def _upscale_block(self, ip, id):
    '''
    As per suggestion from http://distill.pub/2016/deconv-checkerboard/, I am swapping out
    SubPixelConvolution to simple Nearest Neighbour Upsampling
    '''
    init = ip

    x = Convolution2D(128, 3, 3, activation="linear", border_mode='same', name='sr_res_upconv1_%d' % id,
                      init=self.init)(init)
    x = LeakyReLU(alpha=0.25, name='sr_res_up_lr_%d_1_1' % id)(x)
    x = UpSampling2D(name='sr_res_upscale_%d' % id)(x)
    #x = SubPixelUpscaling(r=2, channels=32)(x)
    x = Convolution2D(128, 3, 3, activation="linear", border_mode='same', name='sr_res_filter1_%d' % id,
                      init=self.init)(x)
    x = LeakyReLU(alpha=0.3, name='sr_res_up_lr_%d_1_2' % id)(x)

    return x

def set_trainable(self, model, value=True):
    if self.sr_res_layers is None:
        self.sr_res_layers = [layer for layer in model.layers
                                if 'sr_res_' in layer.name]

    for layer in self.sr_res_layers:
        layer.trainable = value

def get_generator_output(self, input_img, srgan_model):
    if self.output_func is None:
        gen_output_layer = [layer for layer in srgan_model.layers
                            if layer.name == "sr_res_conv_denorm"][0]
        self.output_func = K.function([srgan_model.layers[0].input],
                                      [gen_output_layer.output])

    return self.output_func([input_img])
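
In create_sr_model and _residual_block, merge([x, init], mode='sum') and the init= keyword of Convolution2D also look Keras 1 specific. Here is my rough, untested Keras 2 rewrite of the residual block (add() replacing the sum merge, kernel_initializer replacing init):

    from keras.layers import Conv2D, BatchNormalization, LeakyReLU, add

    def residual_block_keras2(ip, id, filters=64, init='glorot_uniform'):
        # Keras 1: Convolution2D(filters, 3, 3, border_mode='same', init=init)
        x = Conv2D(filters, (3, 3), activation='linear', padding='same',
                   kernel_initializer=init, name='sr_res_conv_%d_1' % id)(ip)
        x = BatchNormalization(axis=-1, name='sr_res_bn_%d_1' % id)(x)
        x = LeakyReLU(alpha=0.25, name='sr_res_activation_%d_1' % id)(x)

        x = Conv2D(filters, (3, 3), activation='linear', padding='same',
                   kernel_initializer=init, name='sr_res_conv_%d_2' % id)(x)
        x = BatchNormalization(axis=-1, name='sr_res_bn_%d_2' % id)(x)

        # Keras 1: merge([x, ip], mode='sum')  ->  Keras 2: add()
        return add([x, ip], name='sr_res_merge_%d' % id)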

The SRGANNetwork class:

def __init__(self, img_width=96, img_height=96, batch_size=16, nb_scales=2):
    self.img_width = img_width
    self.img_height = img_height
    self.batch_size = batch_size
    self.nb_scales = nb_scales  # 4x upscaling overall, via nb_scales UpSampling2D stages

    self.discriminative_network = None # type: DiscriminatorNetwork
    self.generative_network = None # type: GenerativeNetwork
    self.vgg_network = None # type: VGGNetwork

    self.srgan_model_ = None # type: Model
    self.generative_model_ = None # type: Model
    self.discriminative_model_ = None #type: Model

def build_srgan_pretrain_model(self, use_small_srgan=False):
    large_width = self.img_width * 4
    large_height = self.img_height * 4

    self.generative_network = GenerativeNetwork(self.img_width, self.img_height, self.batch_size, self.nb_scales,
                                                use_small_srgan)
    self.vgg_network = VGGNetwork(large_width, large_height)

    ip = Input(shape=(3, self.img_width, self.img_height), name='x_generator')
    ip_vgg = Input(shape=(3, large_width, large_height), name='x_vgg')  # Actual X images

    sr_output = self.generative_network.create_sr_model(ip)
    self.generative_model_ = Model(ip, sr_output)

    vgg_output = self.vgg_network.append_vgg_network(sr_output, ip_vgg, pre_train=True)

    self.srgan_model_ = Model(input=[ip, ip_vgg],
                              output=vgg_output)

    self.vgg_network.load_vgg_weight(self.srgan_model_)

    srgan_optimizer = Adam(lr=1e-4)
    generator_optimizer = Adam(lr=1e-4)

    self.generative_model_.compile(generator_optimizer, dummy_loss)
    self.srgan_model_.compile(srgan_optimizer, dummy_loss)

    return self.srgan_model_


def build_discriminator_pretrain_model(self, use_small_srgan=False, use_small_discriminator=False):
    large_width = self.img_width * 4
    large_height = self.img_height * 4

    self.generative_network = GenerativeNetwork(self.img_width, self.img_height, self.batch_size, self.nb_scales,
                                                use_small_srgan)
    self.discriminative_network = DiscriminatorNetwork(large_width, large_height,
                                                       small_model=use_small_discriminator)

    ip = Input(shape=(3, self.img_width, self.img_height), name='x_generator')
    ip_gan = Input(shape=(3, large_width, large_height), name='x_discriminator')  # Actual X images

    sr_output = self.generative_network.create_sr_model(ip)
    self.generative_model_ = Model(ip, sr_output)
    #self.generative_network.set_trainable(self.generative_model_, value=False)

    gan_output = self.discriminative_network.append_gan_network(ip_gan)
    self.discriminative_model_ = Model(ip_gan, gan_output)

    generator_out = self.generative_model_(ip)
    gan_output = self.discriminative_model_(generator_out)

    self.srgan_model_ = Model(input=ip, output=gan_output)

    srgan_optimizer = Adam(lr=1e-4)
    generator_optimizer = Adam(lr=1e-4)
    discriminator_optimizer = Adam(lr=1e-4)

    self.generative_model_.compile(generator_optimizer, loss='mse')
    self.discriminative_model_.compile(discriminator_optimizer, loss='categorical_crossentropy', metrics=['acc'])
    self.srgan_model_.compile(srgan_optimizer, loss='categorical_crossentropy', metrics=['acc'])

    return self.discriminative_model_


def build_srgan_model(self, use_small_srgan=False, use_small_discriminator=False):
    large_width = self.img_width * 4
    large_height = self.img_height * 4

    self.generative_network = GenerativeNetwork(self.img_width, self.img_height, self.batch_size, nb_upscales=self.nb_scales,
                                                small_model=use_small_srgan)
    self.discriminative_network = DiscriminatorNetwork(large_width, large_height,
                                                       small_model=use_small_discriminator)
    self.vgg_network = VGGNetwork(large_width, large_height)

    ip = Input(shape=(3, self.img_width, self.img_height), name='x_generator')
    ip_gan = Input(shape=(3, large_width, large_height), name='x_discriminator') # Actual X images
    ip_vgg = Input(shape=(3, large_width, large_height), name='x_vgg') # Actual X images

    sr_output = self.generative_network.create_sr_model(ip)
    self.generative_model_ = Model(ip, sr_output)

    gan_output = self.discriminative_network.append_gan_network(ip_gan)
    self.discriminative_model_ = Model(ip_gan, gan_output)

    gan_output = self.discriminative_model_(self.generative_model_.output)
    vgg_output = self.vgg_network.append_vgg_network(self.generative_model_.output, ip_vgg)

    self.srgan_model_ = Model(input=[ip, ip_gan, ip_vgg], output=[gan_output, vgg_output])

    self.vgg_network.load_vgg_weight(self.srgan_model_)

    srgan_optimizer = Adam(lr=1e-4)
    generator_optimizer = Adam(lr=1e-4)
    discriminator_optimizer = Adam(lr=1e-4)

    self.generative_model_.compile(generator_optimizer, dummy_loss)
    self.discriminative_model_.compile(discriminator_optimizer, loss='categorical_crossentropy', metrics=['acc'])
    self.srgan_model_.compile(srgan_optimizer, dummy_loss)

    return self.srgan_model_
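
One thing I am unsure about is whether reusing self.generative_model_.output as the input to the other sub-networks is still valid when constructing the combined model in Keras 2. An alternative wiring I was considering (untested), following the pattern already used in build_discriminator_pretrain_model, is to call the sub-models on the tensors explicitly:

    # call the sub-models on tensors instead of reusing .output
    sr_output = self.generative_model_(ip)
    gan_output = self.discriminative_model_(sr_output)
    vgg_output = self.vgg_network.append_vgg_network(sr_output, ip_vgg)

    self.srgan_model_ = Model(inputs=[ip, ip_gan, ip_vgg],
                              outputs=[gan_output, vgg_output])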

But when I run the code with Keras 2.0, I get the following error:

    Traceback (most recent call last):
      File "<ipython-input-43-f26b38c03792>", line 1, in <module>
        runfile('F:/keras_projects/Super-Resolution-using-Generative-Adversarial-Networks-master/models.py', wdir='F:/keras_projects/Super-Resolution-using-Generative-Adversarial-Networks-master')
      File "C:\Program Files\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 866, in runfile
        execfile(filename, namespace)
      File "C:\Program Files\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
        exec(compile(f.read(), filename, 'exec'), namespace)
      File "F:/keras_projects/Super-Resolution-using-Generative-Adversarial-Networks-master/models.py", line 778, in <module>
        srgan_network.build_srgan_model()
      File "F:/keras_projects/Super-Resolution-using-Generative-Adversarial-Networks-master/models.py", line 421, in build_srgan_model
        self.srgan_model_ = Model(inputs=[ip, ip_gan, ip_vgg], outputs=[gan_output, vgg_output])
      File "C:\Program Files\Anaconda3\lib\site-packages\keras\legacy\interfaces.py", line 88, in wrapper
        return func(*args, **kwargs)
      File "C:\Program Files\Anaconda3\lib\site-packages\keras\engine\topology.py", line 1676, in __init__
        build_map_of_graph(x, finished_nodes, nodes_in_progress)
      File "C:\Program Files\Anaconda3\lib\site-packages\keras\engine\topology.py", line 1666, in build_map_of_graph
        layer, node_index, tensor_index)
      File "C:\Program Files\Anaconda3\lib\site-packages\keras\engine\topology.py", line 1664, in build_map_of_graph
        next_node = layer.inbound_nodes[node_index]
    AttributeError: 'NoneType' object has no attribute 'inbound_nodes'

With my limited experience, I suspect this is caused by the Keras version. Sorry that I have not explained things more clearly.

