Thursday, August 29, 2024

How to repair and clone disk with ddrescue

ddrescue is a tool that can be used to repair and clone disks on a Linux system. This includes hard drives, partitions, DVD discs, flash drives, or really any storage device. It performs data recovery by copying data in blocks.


If ddrescue encounters errors in the data it is trying to copy, it can discard the bad blocks and keep only the good data. This makes it an ideal tool for recovering data from a corrupted disk. In this tutorial, you will learn how to install ddrescue and use it to clone a full disk or partition, and write that data to an empty storage space.

To install ddrescue on Ubuntu, Debian, and Linux Mint:

$ sudo apt install gddrescue

Clone a partition to image file or other disk

In this section, we will use ddrescue to clone a partition or full disk (the process is the same) to an image file. That file can then be written to another disk or partition afterwards. We will also show the process to clone a partition directly to another disk, bypassing the image file creation and instead creating a direct clone onto new hardware.

  1. First, open a command line terminal and identify the device path to the hard drive or partition that you would like to clone. For this, you can use a tool like lsblk, fdisk, etc.
    $ lsblk
    
    Here we find the device path /dev/sdb1, which is the partition we want to clone.
  2. Next, we will use the following command syntax to copy the partition to an image file. We are using /dev/sdX in the example below, but you would just need to substitute your own partition or device in place of it. The contents will be written to a file called backup.img. Note that the -d option will force ddrescue to ignore the kernel’s cache and instead access the disk directly.
    $ sudo ddrescue -d /dev/sdX backup.img backup.logfile
    
    ddrescue process of cloning the partition to an image file
  3. Note that if you are trying to recover data from a corrupted disk, you may want to append the -r option after the first try above. This will instruct ddrescue to retry bad sectors in an effort to recover as much data as possible. You can specify the number of retries after the option. In this example, we will use 3 retries.
    $ sudo ddrescue -d -r3 /dev/sdX backup.img backup.logfile
    
  4. Next, we will copy the new image file to a different disk or partition. We can use an ordinary dd command for this.
    $ sudo dd if=backup.img of=/dev/sdX
    

    Alternatively, the ddrescue command can be used.

    $ sudo ddrescue -f backup.img /dev/sdX clone.logfile
    

    The -f option forces ddrescue to overwrite the output, which is required here because we are writing to a block device rather than a file.

  5. If you want to clone a disk or partition directly to another, thereby bypassing any image file, you can do so with the following syntax. In this example, we are cloning partition /dev/sdX1 to /dev/sdX2.
    $ sudo ddrescue -d -f /dev/sdX1 /dev/sdX2 clone.logfile
    
    After completing the steps above, you can access the cloned storage and will hopefully see all of your files there, assuming that ddrescue was successful in recovering them.

PFC (Priority-based Flow Control)

From: https://support.huawei.com/enterprise/zh/doc/EDOC1100138438/d1e17776

PFC (Priority-based Flow Control), also called Per Priority Pause or CBFC (Class Based Flow Control), is an enhancement of the Pause mechanism. The existing Ethernet Pause mechanism (IEEE 802.3 Annex 31B) can also achieve zero packet loss. It works as follows: when a downstream device finds that its receiving capability is lower than the sending capability of the upstream device, it proactively sends a Pause frame to the upstream device, asking it to stop sending traffic and wait for a certain time before resuming. However, the Ethernet Pause mechanism pauses traffic on the whole interface, i.e. when congestion occurs, all traffic on the link is paused.

PFC, by contrast, allows eight virtual channels to be created on a single Ethernet link, with a priority level assigned to each virtual channel. Any individual virtual channel can be paused and restarted on its own while traffic on the other virtual channels passes through without interruption. This approach lets the network provide a lossless class of service for an individual virtual link that can coexist with other traffic types on the same interface.

Figure 2-1 How PFC works

As shown in Figure 2-1, the sending interface of DeviceA is divided into eight priority queues and the receiving interface of DeviceB has eight receive buffers, corresponding one to one (there is a one-to-one mapping between packet priorities and interface queues), which forms eight virtualized channels in the network. The buffers differ in size, so each queue has a different data buffering capability.

When a receive buffer on DeviceB's interface becomes congested, i.e. when a queue buffer on the device is being consumed quickly and exceeds a certain threshold (which can be set to a proportion of the port queue buffer, such as 1/2 or 3/4), DeviceB sends a backpressure signal "STOP" in the direction the data came from, that is, to the upstream device DeviceA.

When DeviceA receives the backpressure signal, it stops sending packets of the corresponding priority queue as instructed and stores the data in its local interface buffer. If DeviceA's local interface buffer usage exceeds its threshold, DeviceA in turn applies backpressure further upstream, and so on hop by hop up to the network end device, thereby eliminating packet loss caused by congestion at network nodes.

The "backpressure signal" is actually an Ethernet frame; its format is shown in Figure 2-2.
Figure 2-2 PFC frame format
Table 2-1 Definition of the PFC frame

Destination address: destination MAC address; the value is fixed at 01-80-c2-00-00-01.

Source address: source MAC address.

Ethertype: Ethernet frame type; the value is 88-08.

Control opcode: control code; the value is 01-01.

Priority enable vector: backpressure enable vector. E(n) corresponds to priority queue n and indicates whether priority queue n needs to be paused. E(n)=1 means priority queue n needs to be paused, and the pause duration is Time(n); E(n)=0 means that priority queue does not need to be paused.

Time(0)~Time(7): backpressure (pause) timers. Time(n)=0 cancels the pause for queue n.

Pad: reserved; transmitted as 0.

CRC: cyclic redundancy check.
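
To make the frame layout in Table 2-1 concrete, here is a minimal Python sketch that assembles a PFC pause frame from those fields. It is an illustration only: the function name build_pfc_frame and the example MAC address are invented, and the bit position of E(n) inside the 2-byte enable vector is assumed to be bit n of the low-order octet.

import struct

def build_pfc_frame(src_mac: bytes, pause_times: list) -> bytes:
    """Assemble a PFC frame as described in Table 2-1 (illustrative sketch)."""
    assert len(src_mac) == 6 and len(pause_times) == 8

    dst_mac = bytes.fromhex("0180c2000001")   # fixed destination MAC
    ethertype = struct.pack("!H", 0x8808)     # MAC control frame Ethertype 88-08
    opcode = struct.pack("!H", 0x0101)        # PFC control opcode 01-01

    # Priority enable vector: E(n)=1 when queue n carries a non-zero Time(n).
    enable_vector = 0
    for n, t in enumerate(pause_times):
        if t:
            enable_vector |= 1 << n

    payload = struct.pack("!H", enable_vector)
    payload += b"".join(struct.pack("!H", t) for t in pause_times)  # Time(0)..Time(7)

    frame = dst_mac + src_mac + ethertype + opcode + payload
    frame += b"\x00" * (60 - len(frame))      # Pad (transmitted as 0); CRC is appended by the NIC
    return frame

# Example: pause priority queue 3 for 0xFFFF quanta; Time(n)=0 leaves the other queues running.
frame = build_pfc_frame(bytes.fromhex("001122334455"), [0, 0, 0, 0xFFFF, 0, 0, 0, 0])
print(frame.hex())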

In summary, the device sets a separate PFC threshold for each of the eight queues on a port. When the buffer used by a queue exceeds its PFC threshold, the device sends a PFC backpressure notification frame upstream, telling the upstream device to stop sending; when the buffer used by the queue falls back below the PFC threshold, it sends a PFC backpressure stop frame upstream, telling the upstream device to resume sending. In this way, packets are ultimately transmitted without loss.

It follows that with PFC, traffic is paused only for one or a few priority queues, not for the whole interface; each queue can be paused and restarted individually without affecting traffic in the other queues, so multiple kinds of traffic can truly share the link. For priority queues not controlled by PFC, the system applies no backpressure and simply drops packets when congestion occurs.
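
As a rough illustration of the per-queue behaviour described above, the following Python sketch models the decision one receive queue could make about when to send a "STOP" (pause) or "resume" signal upstream. The class name PfcQueue and the threshold numbers are invented for the example; real switches implement this in hardware with vendor-specific thresholds and headroom.

class PfcQueue:
    """Toy model of one ingress queue with a PFC threshold (not a real switch API)."""

    def __init__(self, xoff_threshold: int, xon_threshold: int):
        self.xoff_threshold = xoff_threshold   # send STOP when usage rises above this
        self.xon_threshold = xon_threshold     # send resume when usage falls below this
        self.paused_upstream = False

    def on_buffer_usage(self, used: int):
        """Return 'STOP' or 'RESUME' when a PFC frame should be sent upstream, else None."""
        if not self.paused_upstream and used > self.xoff_threshold:
            self.paused_upstream = True
            return "STOP"       # this priority is congested: pause only this queue
        if self.paused_upstream and used < self.xon_threshold:
            self.paused_upstream = False
            return "RESUME"     # buffer has drained: tell the upstream to resume
        return None

# Only queue 3 is congested; the other seven priorities keep flowing.
queues = [PfcQueue(xoff_threshold=750, xon_threshold=500) for _ in range(8)]
print(queues[3].on_buffer_usage(800))   # -> STOP
print(queues[3].on_buffer_usage(400))   # -> RESUME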

However, if a large number of PFC backpressure frames appear in the network, they are very likely to trigger a network deadlock, in which two or more queues become permanently blocked, each waiting for resources that are occupied and blocked by the others, ultimately creating a systemic risk for the network. The intelligent lossless network therefore provides a PFC deadlock detection function: if a device keeps receiving backpressure frames throughout a deadlock detection period, it stops responding to them, ensuring that a PFC deadlock does not occur.

The concepts of lossless and lossy

As described above, loss-free packet transmission over an Ethernet network is achieved through the PFC flow control mechanism. Devices support PFC based on the 802.1p priority and PFC based on the DSCP priority:

  • PFC based on the 802.1p priority: the device maps the 802.1p priority value in a packet to a port queue one to one, i.e. priority 0 corresponds to queue 0, priority 1 to queue 1, and so on.
  • PFC based on the DSCP priority: the device maps the DSCP priority in a packet to an internal priority according to the configured DiffServ domain, and internal priorities correspond one to one to port queues. For the exact mapping, see "Priority Mapping Configuration" in the CloudEngine 12800, 12800E Series Switches Configuration Guide - QoS.

Depending on whether packets need to be transmitted without loss across the network, services can be divided into lossless services and lossy services (a small sketch of this classification follows the list below).

  • Lossless services: services that require loss-free transmission. An 802.1p priority with PFC enabled, or an internal priority mapped from a DSCP priority with PFC enabled, is a lossless priority, and the queue corresponding to that priority is a lossless queue.
  • Lossy services: services that tolerate packet loss. An 802.1p priority without PFC enabled, or an internal priority mapped from a DSCP priority without PFC enabled, is a lossy priority, and the queue corresponding to that priority is a lossy queue.
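
The following small Python sketch simply restates the 802.1p rule above: the priority value is used directly as the queue number, and a queue is lossless exactly when PFC is enabled for that priority. The set of PFC-enabled priorities here (3 and 4) is an invented example.

# Hypothetical example: PFC enabled only for 802.1p priorities 3 and 4.
pfc_enabled_priorities = {3, 4}

def queue_for_802_1p(priority: int) -> int:
    # 802.1p-based PFC: priority n maps one to one to queue n.
    return priority

for p in range(8):
    kind = "lossless" if p in pfc_enabled_priorities else "lossy"
    print(f"802.1p priority {p} -> queue {queue_for_802_1p(p)} ({kind})")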

Monday, June 26, 2023

How to run a simple speedtest from the Raspberry Pi CLI

 

pi@ChunchaiRPI2:/tmp $  wget -O speedtest-cli https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py

--2023-06-26 10:43:47--  https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py

Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.109.133, 185.199.110.133, ...

Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.

HTTP request sent, awaiting response... 200 OK

Length: 65334 (64K) [text/plain]

Saving to: ‘speedtest-cli’


speedtest-cli                                        100%[=====================================================================================================================>]  63.80K  --.-KB/s    in 0.08s


2023-06-26 10:43:48 (849 KB/s) - ‘speedtest-cli’ saved [65334/65334]


pi@ChunchaiRPI2:/tmp $ sudo chmod +x speedtest-cli

pi@ChunchaiRPI2:/tmp $ ./speedtest-cli --simple

Ping: 7.342 ms

Download: 301.20 Mbit/s

Upload: 251.35 Mbit/s

pi@ChunchaiRPI2:/tmp $


Tuesday, June 20, 2023

How to decompile dtb file (Device Tree)

dtc -I dtb -O dts -o devicetree.dts devicetree.dtb


$ sudo apt-get install device-tree-compiler


$ dtc -I dtb -O dts test.dtb > test.dts

$ dtc -I dts -O dtb test.dts > test.dtb


 


reference: https://forum.digilentinc.com/topic/2427-how-to-decompile-dtb-file/



Device Tree, reference:

Device Tree(一):背景介绍  http://www.wowotech.net/device_model/why-dt.html

Device Tree(二):基本概念  http://www.wowotech.net/device_model/dt_basic_concept.html

Device Tree(三):代码分析  http://www.wowotech.net/device_model/dt-code-analysis.html

Device Tree(四):文件结构解析 http://www.wowotech.net/device_model/dt-code-file-struct-parse.html

Thursday, June 1, 2023

HGU – Supported Service Scenarios

From https://halny.com/knowledge-base/hgu-supported-service-scenarios/


HGU mode allows multiple traffic classes to flow across the VEIP.
All UNI interfaces belong to one VEIP, and this part cannot be controlled by OMCI.
The non-OMCI part can instead be controlled via the Web interface and auto-provisioning.
Most OLT vendors support a dual stack:
– IP-HOST #1 -> MGMT (WEB, XML provisioning) – configured by OMCI (from the OLT)
– VEIP (non-OMCI: INTERNET, VoIP, IPTV services) – configured via the ONT Web interface or provisioning



1.Bridge mode – only INTERNET:1-4/WIFI + MGMT

2.Bridge mode – INTERNET:1-4/WIFI, VoIP Interface + MGMT

3.Bridge mode – INTERNET:1-2/WIFI, IPTV:3-4, VoIP Interface + MGMT

4.Router mode – only INTERNET:1-4/WIFI + MGMT

5.Router mode – INTERNET:1-4/WIFI, VoIP Interface + MGMT

6.Router mode – INTERNET:1-2/WIFI, IPTV:3-4, VoIP + MGMT

SFU – Supported Service Scenarios

From: https://halny.com/knowledge-base/sfu-supported-service-scenarios/


1.Access mode – only Internet

2.Access mode – only IPTV

3.Transparent mode – Internet, IPTV, VoIP

4.VLAN translation – rBSA

5.802.1q in 802.1q Begin/End of Tunnel

6.Transparent 802.1q in 802.1q

7.OSE/MdO

Model selection for OMCI layer-2 functions

From: https://blog.csdn.net/JIANGXIN04211/article/details/48294645

There are two broad kinds of layer-2 functions: the MAC bridge and 802.1p mapping.
The MAC bridge is described in IEEE 802.1D and has many features; it can forward transparently based on MAC addresses (true bridging) or on VLAN characteristics (using VLAN filters). The mapping function, by contrast, describes the relationship between one user-side entity and one to eight network-side flow tags. That kind of mapping is equivalent to a MAC bridge that uses only the pbit field of the VLAN tag as its VLAN filter.


These two basic layer-2 services can be combined to meet all kinds of connection requirements. There are three broad basic modes:

N:1 bridging – multiple user ports in the same bridge, with only one network-side service

1:M mapping – one user port mapped to multiple network-side services based on the pbit

1:P filtering – one user port mapped to multiple network-side services based on non-pbit VLAN information

Besides the three basic possibilities above, there are also four more complex combinations, namely:

N:M bridging-mapping – as the name suggests, N user ports in one bridge are first bridged and then mapped based on the pbit

1:MP map-filtering – one user port performs both filtering and mapping

N:P bridging-filtering – N user ports in one bridge are first bridged and then mapped based on non-pbit VLAN information

N:MP bridging-map-filtering – self-explanatory

Structurally, tag filtering takes place next to the MAC bridge rather than at the tagging operation, in the following order:

ANI—Tag operation—Tag filtering—Bridging—Tag filtering—Tag operation—UNI

Many vendors do not implement the 802.1p model and instead adopt the simplest bridging model, the simplest form of which puts each user port in its own bridge; the 1:P model is then used. For a multi-port CPE device (generally composed of an interworking module, such as a Broadlight chip, plus a switching module, such as a Broadcom switch), the switch chip gives each user port its own port VLAN (different for each port) to separate the ports from one another. For untagged flows that need a pbit mapping, the attributes of the managed entity "Extended VLAN tagging operation configuration data" (ME ID 171) can be used to achieve this.
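
As a purely conceptual illustration of that last point, the Python sketch below models a single "untagged ingress -> add tag with a chosen pbit" rule. It is not the G.988 wire encoding of ME 171, and the names VlanTaggingRule and classify are invented; it only shows the kind of rule an OLT would push so that untagged user traffic ends up in the right pbit-mapped network-side flow.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class VlanTaggingRule:
    """Invented, simplified stand-in for one Extended VLAN tagging rule (ME 171)."""
    match_untagged: bool   # rule applies to untagged frames
    add_vid: int           # VLAN ID to push onto the frame
    add_pbit: int          # 802.1p priority to assign (this drives the 1:M mapping)

def classify(frame_is_tagged: bool, rules: List[VlanTaggingRule]) -> Optional[Tuple[int, int]]:
    """Return the (vid, pbit) a matching rule applies, or None if no rule matches."""
    for rule in rules:
        if rule.match_untagged and not frame_is_tagged:
            return rule.add_vid, rule.add_pbit
    return None

# Example: untagged Internet traffic gets VLAN 100 with pbit 0.
rules = [VlanTaggingRule(match_untagged=True, add_vid=100, add_pbit=0)]
print(classify(frame_is_tagged=False, rules=rules))   # -> (100, 0)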

 

Reference: G.984.4 Section 8.2.2
