Thursday, November 10, 2022

Windows cross-subnet Samba access: you can't access this shared folder because your org

 Error message:

      You can't access this shared folder because your organization's security policies block unauthenticated guest access.
      These policies help protect your PC from unsafe or malicious devices on the network.

Cause:

  Windows 10 updated its security policies and, by default, now blocks access to passwordless (guest) Samba shares.

Fix:

  Add a username/password configuration to Samba, and comment out map to guest = bad user in the smb.conf configuration file.

          For example:

    # This option controls how unsuccessful authentication attempts are mapped
    # to anonymous connections
    #map to guest = bad user
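
To create the username/password side of this, here is a minimal sketch (the user name alice is an example; the Linux account must already exist):

    # Set a Samba password for an existing Linux user
    smbpasswd -a alice

    # Restart Samba so the smb.conf change takes effect
    # (the service is named smbd on Debian/Ubuntu, smb on RHEL/CentOS)
    systemctl restart smbd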

Friday, April 29, 2022

Cisco ACI Inter VRF/Tenant Route Leaking Design – Simplified!

There is a difference between something you know and something you understand. I recently came across such a situation, when I realized I knew perfectly well how to configure inter-VRF communication in ACI, but the in-depth understanding was missing. The intention of this article is not to share my experience, but the learning.

Please note that the term Shared Services, widely used in most documents, is nothing but inter-VRF/tenant communication.

I would like to mention that the level of documentation Cisco has produced for ACI is commendable. You can find a detailed guide for all the bells and whistles in ACI on the official Cisco site.

Then, Why Am I Writing This?

Well, I think there is a very critical design guideline around inter-VRF/tenant route-leaking methods in ACI which I should highlight. So, I am just trying to do that…

Note: This is an advanced topic in ACI, and I assume you have a working knowledge of ACI components such as Tenant, EPG, BD, VRF, Contract, etc. So, let's begin…

Need for Inter VRF/Tenant Communication

The first thing we need to understand is the cases in which inter-VRF/tenant communication is needed:

  1. Shared Services – You have an EPG hosting common shared services in one VRF (the shared-service provider), and one or more EPGs in another VRF (the shared-service consumers) use those services.
  2. Ad hoc – Two or more specific EPGs sit in separate VRFs and need to communicate with each other as required, e.g. a multi-tier application with the EPG for each tier in a separate VRF.

Role of Contracts

As you are probably aware, all communication in ACI is governed by contracts; even in the case of inter-VRF communication, we need contracts between the EPGs. This is where the duality of the contract comes into the picture. A contract plays two roles in inter-VRF/tenant communication:

  1. Access control, through the subjects and filters defined in the contract
  2. Route leaking between the consumer and provider VRFs (many of us have looked at contracts as ACLs; they are more than that)

Contract Scope is an important attribute; it can be either of the two mentioned below (a quick way to verify it follows the list):

  1. Scope: Tenant – The provider and consumer EPGs are in different VRFs but the same tenant.
  2. Scope: Global – The provider and consumer EPGs are in different VRFs as well as different tenants.
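
From the APIC CLI, you can confirm the configured scope of a contract with moquery (a sketch; the grep filter is only illustrative):

apic1# moquery -c vzBrCP | grep -E 'name|scope'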

A Few Considerations

  • In the case of inter-tenant communication between two user tenants, the contract must be created in the provider tenant and exported to the consumer tenant. On the consumer side, the exported contract has to be attached to the consumer EPG as a Consumed Contract Interface.
  • The common tenant comes with superpowers: if you have one EPG in a user tenant and another EPG in the common tenant, create the contract in the common tenant. That eliminates the overhead of exporting the contract, as contracts created in the common tenant can be attached directly to EPGs in user tenants as provided or consumed contracts.
  • In the case of inter-VRF communication within the same tenant, the contract is simply provided and consumed by the respective EPGs as usual; no contract export is required.
  • Since the subnets of the two VRFs will be leaked into each other, they have to be unique and non-overlapping.

Design Approach

The design approach is selected based on whether a single EPG or multiple EPGs (in the provider VRF) will serve as the shared-service provider to the EPGs in the consumer VRF. Basically, there are two ways of doing the route leaking and policy enforcement between EPGs in separate VRFs.

1. Shared Services with Subnet defined under provider EPG (usage: when only a single EPG in the provider VRF will serve as the shared-service provider)

2. BD-BD Shared Services (usage: when multiple EPGs in the provider VRF can serve as shared-service providers)

Let's get into more details of both methods one by one:

1. Shared Services with Subnet defined under provider EPG

This is the preferred method of configuring inter-VRF/tenant communication as per the ACI Best Practices guide, and it works perfectly fine for shared-services requirements (until you reach the Limitations section below). There are tons of documents online that serve as step-by-step configuration guides for this method. I am taking the example of EPGs in separate tenants and VRFs to explain it.

Procedure:

  • Define the provider-side subnet at the EPG level and mark it as Shared between VRFs (define the subnet at the EPG level even if the same subnet is defined at the BD level for use by other EPGs, as in an application-centric deployment).
  • Define the consumer-side subnet at the BD or EPG level and mark it as Shared between VRFs.
  • Create a contract in the provider tenant and attach it to the provider EPG as a provided contract.
  • Export that contract from the provider tenant to the consumer tenant, and attach it to the consumer EPG as a consumed contract interface.

A logical representation of the setup, with the provider and consumer EPGs in different tenants, is shown in the diagram below (InterVRF.png).

What Happens Under the Hood?

Provider Side:

  • As soon as the contract is attached at both the provider and consumer EPGs, the consumer-side prefix P2 is leaked into the routing table of the provider-side VRF (VRF1).
  • Along with the routes, VNID rewrite information corresponding to the consumer-side VRF (VRF2) and a ClassID are installed in VRF1. Please note that the ClassID, which is nothing but the pcTag (EPG ID), is set to 0, meaning no consumer-EPG information is associated with the leaked prefix.
  • Policy is not programmed in the provider VRF either. So the provider side just forwards packets based on routing information; policy is never applied on this side.

Consumer Side:

  • The provider-side prefix P1, defined at the EPG level, is installed in the routing table of the consumer-side VRF (VRF2).
  • Along with the prefix, VNID rewrite information corresponding to the provider-side VRF (VRF1) and a ClassID are installed in VRF2. Please note that the ClassID on the consumer side corresponds to EPG1 (the provider EPG). This can be considered a 1:1 static mapping of the provider-side subnet to the provider-side EPG.
  • Any incoming or outgoing traffic belonging to the provider subnet (P1) will be classified as EPG1.
  • The contract is programmed in the consumer VRF (VRF2). So, in this method, policy is applied on the consumer side only, and never on the provider side.

Limitations

In a nutshell, this method amounts to a 1:1 static mapping of the provider-side subnet to the provider-side EPG in the consumer VRF.

What if two provider EPGs use the same subnet, as is common in an application-centric deployment, and both need to provide services to the consumer? If we follow the same method, what would be the ClassID (EPG ID) for the provider's leaked subnet installed on the consumer side?

Well, both providers will keep fighting (not literally): at any given time, the ClassID of only one provider EPG is programmed in the consumer VRF, leaving the other inaccessible to the consumer (a contract-drop condition).

Interesting, isn't it? That's when the second method, a.k.a. the poor brother of the first method, comes to the rescue…

2. BD-BD Shared Services

In my opinion, this method has been highly underrated due to its low-profile appearance in shared-services deployment guides. It also serves as the workaround for the limitation we saw in the first method.

In the BD-BD shared-services method, the EPGs in the two VRFs form both provider and consumer relations with each other. It's a kind of 360-degree relationship, making each of them both a shared-service provider and a shared-service consumer. Again, I am taking the example of EPGs in separate tenants and VRFs to explain this method.

Procedure:

  • Define the subnets at the BD level for the EPGs in both VRFs and mark them as Shared between VRFs.
  • There is no need to define a subnet at the EPG level; route leaking is done between the BDs.
  • Create a contract in each tenant, and provide and consume them on both sides. It's like configuring contracts as in the first method, but twice and in both directions.
  • What if you don't need both EPGs to be both provider and consumer? You still need to form the consumer and provider relations on both sides.
  • In that case, you can make the second contract a dummy contract, with a filter specifying a port on which no service is running, or denying an unneeded port. It doesn't need to enable communication; it is only required to form the relationships. To save TCAM space, don't check Apply Both Directions on it.

A logical representation of the setup, with the provider and consumer EPGs in different tenants, is shown in the diagram below (InterVRF2.png).

What Happens Under the Hood?

If you look at the diagram, everything is identical on both sides. Routes are leaked between the two VRFs along with the VNID rewrite info, and contracts (policies) are programmed in both VRFs.

One thing to notice is that the ClassIDs on both sides are set to 0. This means no EPG information is associated with the leaked subnets; hence there is no static mapping of subnet to EPG ID.

What essentially happens in this case is that, to classify an endpoint, a lookup is done in the actual endpoint table, which contains exact /32 host IPs and EPG information, as opposed to the first method, where the endpoint is classified based on the provider subnet defined at the EPG level.

So even if two EPGs use the same subnet, there is no impact, as no static mapping of subnet to EPG is involved and the endpoint-classification process is more granular.

Once the endpoint is classified, policy is applied as per the contract between the two EPGs.

Limitations

The BD-BD shared-services method requires both sides to have provider and consumer contracts, regardless of whether they are needed for communication.

This eats up TCAM space, so this method should be chosen only when more than one EPG in the same VRF must act as a shared-service provider to the consumer.

Quickbits

Verify the VNID rewrite information and ClassID on the leaf switches using the following commands:

Leaf-1#vsh

Leaf-1#show ip route vrf <tenant:vrf_name> <imported_IP_prefix> detail

Read the VRF crossing information section of the output for the details. The ClassID is shown in hex; converting it to decimal gives the pcTag of the EPG.
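
For example (the tenant, VRF, and prefix here are hypothetical placeholders):

Leaf-1# vsh

Leaf-1# show ip route vrf Tenant1:VRF2 10.1.1.0/24 detail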

Hope this post is helpful for all the ACI Experts out there. Your feedback will be highly appreciated!

Regards,

Jayesh

References:

https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/ACI_Best_Practices/b_ACI_Best_Practices.html

https://ciscolive.cisco.com/on-demand-library/?search=aci&search.event=ciscoliveus2018#/session/1509501653465001PRkT

Wednesday, March 23, 2022

DingTalk Replay

 // Show vConsole's Network tab to capture the replay request (assumes the page embeds vConsole)
 vConsole.showTab("network");

http://web-old.archive.org/web/20220323041715/https://blog.51cto.com/u_15127579/3971663


https://blog.51cto.com/u_15127579/3971663

Monday, January 31, 2022

ESXi 7.0 License

 VMware vCenter 7.0 Standard

104HH-D4343-07879-MV08K-2D2H2
410NA-DW28H-H74K1-ZK882-948L4
406DK-FWHEH-075K8-XAC06-0JH08

VMware vSphere ESXi 7.0 Enterprise Plus
JJ2WR-25L9P-H71A8-6J20P-C0K3F
HN2X0-0DH5M-M78Q1-780HH-CN214
JH09A-2YL84-M7EC8-FL0K2-3N2J2

Friday, January 21, 2022

Resetting the SSO Administrator Password on VCSA 6.5

 The procedure for resetting the SSO Administrator password on VCSA 6.5 is as follows:


1. Log in to the VCSA 6.5 command-line interface, enter the "shell" command to activate the bash shell, then run the following to confirm the default SSO domain name:

/usr/lib/vmware-vmafd/bin/vmafd-cli get-domain-name --server-name localhost

The command prints the default SSO domain (typically vsphere.local).

2. Once you have the default SSO domain, run the following command to start vdcadmintool:

/usr/lib/vmware-vmdir/bin/vdcadmintool

3. In the vdcadmintool menu, enter "3" to reset the account password, then enter the UPN "administrator@vsphere.local". The tool generates a new password; record it, log in to the Web Client with it, and then change the password there.


Wednesday, January 5, 2022

An NFS Article Packed with Practical Details


NFS

1. Installation

yum install nfs-utils  -y

2. Configuration

Main configuration file: /etc/exports

Example configuration:

/nfsfile 192.168.10.*(rw,sync,root_squash)

  • /nfsfile is the shared directory. Watch the directory's permissions: if everything else is set up correctly but access still fails, try setting its permissions to 777.

  • 192.168.10.* specifies which client IPs are allowed; set it to the client IP or subnet you need, or to * for no restriction.

  • (rw,sync,root_squash) are the export options:

    Parameter        Effect
    ro               Read-only
    rw               Read-write
    root_squash      When an NFS client accesses the share as root, map it to the NFS server's anonymous user
    no_root_squash   When an NFS client accesses the share as root, map it to the NFS server's root user
    all_squash       Map every client account to the NFS server's anonymous user
    sync             Synchronous: write data to memory and disk at the same time, guaranteeing no data loss
    async            Asynchronous: stage data in memory first, then write it to disk; more efficient, but data may be lost
    anonuid          Anonymous user ID
    anongid          Anonymous group ID

    Note that there is no space between the NFS client address and the options.
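
After editing /etc/exports, re-export the shares and verify the result (exportfs ships with nfs-utils):

[root@djx ~]# exportfs -ra   # re-read /etc/exports and apply the changes
[root@djx ~]# exportfs -v    # list the active exports with their options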

Pinning NFS to specific ports

# Check the current RPC port assignments
[root@djx ~]# rpcinfo  -p  localhost
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    2   udp  20048  mountd
    100005    2   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    3   udp   2049  nfs_acl
    100021    1   udp  36449  nlockmgr
    100021    3   udp  36449  nlockmgr
    100021    4   udp  36449  nlockmgr
    100021    1   tcp  40638  nlockmgr
    100021    3   tcp  40638  nlockmgr
    100021    4   tcp  40638  nlockmgr
# Pin the mountd and related ports
[root@djx ~]# grep  "PORT"  /etc/sysconfig/nfs
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662
STATD_OUTGOING_PORT=2020
# These settings are commented out by default; remove the leading # to enable them. Then, in the firewall, open TCP/UDP ports 111 and 2049, TCP ports 2020, 662, 892 and 32803, and UDP port 32769
[root@djx ~]# firewall-cmd  --add-port={111/tcp,111/udp,2049/tcp,2049/udp,32769/udp,2020/tcp,662/tcp,892/tcp,32803/tcp}  --permanent  
[root@djx ~]# firewall-cmd  --reload 
[root@djx ~]# systemctl  restart nfs-server 
[root@djx ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  49166  status
    100024    1   tcp  58683  status
    100005    1   udp    892  mountd
    100005    1   tcp    892  mountd
    100005    2   udp    892  mountd
    100005    2   tcp    892  mountd
    100005    3   udp    892  mountd
    100005    3   tcp    892  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    3   udp   2049  nfs_acl
    100021    1   udp  32769  nlockmgr
    100021    3   udp  32769  nlockmgr
    100021    4   udp  32769  nlockmgr
    100021    1   tcp  32803  nlockmgr
    100021    3   tcp  32803  nlockmgr
    100021    4   tcp  32803  nlockmgr

I have seen some articles say that you also need to configure RQUOTAD_PORT, but I didn't add it and things still worked; the /etc/sysconfig/nfs file doesn't contain that setting either, so I left it out. It may depend on the version: my environment is CentOS Linux release 7.4.1708 with nfs-utils-1.3.0-0.61.el7.x86_64.
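
From a client, you can confirm that the pinned ports are visible through the firewall by pointing rpcinfo at the server (the IP is an example):

[root@client ~]# rpcinfo -p 192.168.10.10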

3. Start the services and enable them at boot

Before NFS can be used for file sharing, the RPC (Remote Procedure Call) service must deliver the NFS server's IP address, port numbers, and other information to clients. Therefore, before starting the NFS service, also restart and enable the rpcbind service, and add both services to the boot startup items.

[root@djx ~]# systemctl restart rpcbind
[root@djx ~]# systemctl enable rpcbind
[root@djx ~]# systemctl restart nfs-server
[root@djx ~]# systemctl enable nfs-server

4. Mounting on the NFS client

Configuring the NFS client is also very simple. First use the showmount command (with the necessary parameters; see the table below) to query the NFS server's shared resources. The output format is "shared directory  allowed client addresses".

Parameters available to the showmount command:

    Parameter   Effect
    -e          Show the NFS server's export list
    -a          Show the NFS resources mounted on the local host
    -v          Show the version number
[root@djx ~]# showmount -e 192.168.10.10
Export list for 192.168.10.10:
/nfsfile 192.168.10.*

Then create a mount point on the NFS client. Use the mount command with the -t parameter to specify the filesystem type, followed by the server's IP address, the shared directory on the server, and the local directory (on the client) to mount onto.

[root@linuxprobe ~]# mkdir /nfsfile
[root@linuxprobe ~]# mount -t nfs 192.168.10.10:/nfsfile /nfsfile

After a successful mount, you should be able to see the file contents written in the earlier steps. If you want the NFS share to remain effective permanently, write it into the fstab file:

[root@linuxprobe ~]# cat /nfsfile/readme
welcome to linuxprobe.com
[root@linuxprobe ~]# vim /etc/fstab 
#
# /etc/fstab
# Created by anaconda on Wed May 4 19:26:23 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root / xfs defaults 1 1
UUID=812b1f7c-8b5b-43da-8c06-b9999e0fe48b /boot xfs defaults 1 2
/dev/mapper/rhel-swap swap swap defaults 0 0
/dev/cdrom /media/cdrom iso9660 defaults 0 0 
192.168.10.10:/nfsfile /nfsfile nfs defaults 0 0

5. Errors and fixes

5.1 Mounted by root, but ordinary users have no write permission

I recently hit this problem while using NFS: after mounting the share (only the root user can mount it), creating files in the mounted directory as an ordinary user fails with a permission-denied error.

The fix is to set anonuid=0, anongid=0, and all_squash. With this, whatever account an NFS client uses, it is mapped to the server user with ID 0, i.e., root. Ordinary users then have permission to create files in the directory, and the files they create are owned by root.

Extension: when the client and the server have the same user with the same ID, you can set anonuid to that shared ID; files created will then be owned by the user corresponding to that ID. Note that the IDs must match.
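
As a sketch, reusing the directory and subnet from the earlier example, the export entry would look like:

# every client account is mapped to uid/gid 0 (root) on the server
/nfsfile 192.168.10.*(rw,sync,all_squash,anonuid=0,anongid=0)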

5.2 Network error 53, part 1

A Windows client connecting to a Linux NFS server fails with network error 53.

The export configuration needs to be modified to add the insecure option.


5.3 Network error 53, part 2

This applies when Windows Server 2008 is the client doing the mount.

If the connection still reports error 53 after the settings above, try the second step below.

When setting read/write permissions in the /etc/exports configuration file, add the no_root_squash option; it does not work without it.
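
Putting 5.2 and 5.3 together, an export entry for a Windows client might look like this (the directory and subnet are examples):

/home 192.168.1.*(rw,sync,insecure,no_root_squash)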

After changing the configuration, restart the NFS server:

systemctl restart nfs-server

5.4 Network error 53, part 3

The exported directory's permissions are best set to 777; otherwise the share may be inaccessible.

5.5 Network error 53, part 4

The mount path used on the client is wrong. See the examples below:

For example, if the exported directory is /home, the mount command is:

mount  \\192.168.1.23\home  X:\   

If the exported directory is /home/test, the mount command is:

mount  \\192.168.1.23\home/test  X:\   

Note that for multi-level directories the separator after the first level is / rather than \.

5.6 Network error 53, part 5

When mapping the share to a local drive, choose a drive letter that is not in use; pick X, Y, W, or another letter that is rarely used.

6. Installing the NFS client on Windows

Software required for the NFS client on Windows Server 2008

Via Server Manager:

1. Add a role, select File Services, then follow the wizard to install.

2. Add Features and install Remote Server Administration Tools / Role Administration Tools / File Services Tools / Services for Network File System Tools.

Services for Network File System must be installed.

Then run Services for Network File System and start Client for NFS.

Installing the NFS client on Windows 10

(On Windows 10, the NFS client is enabled under Control Panel > Programs > Turn Windows features on or off > Services for NFS > Client for NFS.)

Author: 理想三旬

The copyright of this article belongs to the author. Reposting is welcome. If anything in this article is lacking or outright wrong, please do point it out: that both pushes me to write better and is a responsibility to those who read this article later. Thank you.