Quote:

Personalities : [raid10] [raid6] [raid5] [raid4] [raid0] [linear]
md0 : active raid5 sdc1[3](S) sdf1[2] sde1[1] sdd1[0]
      976488448 blocks super 1.2 level 5, 512k chunk, algorithm 4 [3/3] [UUU]
      bitmap: 0/4 pages [0KB], 65536KB chunk

md1 : active raid5 sdf2[2] sde2[1] sdd2[0]
      976492544 blocks super 1.2 level 5, 512k chunk, algorithm 4 [3/3] [UUU]
      bitmap: 0/4 pages [0KB], 65536KB chunk

md2 : active raid0 md1[1] md0[0]
      1952718848 blocks super 1.2 512k chunks

unused devices: <none>
Quote:

/dev/md0:
        Version : 1.2
  Creation Time : Wed Apr 19 07:41:46 2017
     Raid Level : raid5
     Array Size : 976488448 (931.25 GiB 999.92 GB)
  Used Dev Size : 488244224 (465.63 GiB 499.96 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Wed Apr 19 21:26:57 2017
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : parity-first
     Chunk Size : 512K

           Name : debian-nas:0  (local to host debian-nas)
           UUID : 6487007f:850aba30:162234b5:d8f01b37
         Events : 3523

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1
       2       8       81        2      active sync   /dev/sdf1

       3       8       33        -      spare   /dev/sdc1
Quote:

/dev/md1:
        Version : 1.2
  Creation Time : Wed Apr 19 07:42:42 2017
     Raid Level : raid5
     Array Size : 976492544 (931.26 GiB 999.93 GB)
  Used Dev Size : 488246272 (465.63 GiB 499.96 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Wed Apr 19 17:27:39 2017
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : parity-first
     Chunk Size : 512K

           Name : debian-nas:1  (local to host debian-nas)
           UUID : c8cf17f3:06d85303:242a5a8d:af180452
         Events : 1676

    Number   Major   Minor   RaidDevice State
       0       8       50        0      active sync   /dev/sdd2
       1       8       66        1      active sync   /dev/sde2
       2       8       82        2      active sync   /dev/sdf2
Quote:

/dev/md2:
        Version : 1.2
  Creation Time : Wed Apr 19 16:26:07 2017
     Raid Level : raid0
     Array Size : 1952718848 (1862.26 GiB 1999.58 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Wed Apr 19 16:26:07 2017
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : debian-nas:2  (local to host debian-nas)
           UUID : bd68554d:dfb5c20a:072aaf9c:58bea7bf
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       9        0        0      active sync   /dev/md0
       1       9        1        1      active sync   /dev/md1
Quote:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR h2l29@localhost

# definitions of existing MD arrays

# This configuration was auto-generated on Tue, 11 Apr 2017 11:05:52 +0200
# by mkconf
ARRAY /dev/md0 uuid=6487007f:850aba30:162234b5:d8f01b37 spare-group=spare
ARRAY /dev/md1 uuid=c8cf17f3:06d85303:242a5a8d:af180452 spare-group=spare
ARRAY /dev/md2 uuid=bd68554d:dfb5c20a:072aaf9c:58bea7bf spare-group=spare
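One point worth noting about the `spare-group=spare` entries above: the kernel never moves spares between arrays on its own. Sharing the spare `/dev/sdc1` between `/dev/md0` and `/dev/md1` is done by `mdadm` running in monitor mode, which watches the arrays and migrates a spare into any array in the same spare-group that loses a disk. A minimal sketch of how the daemon would be started (on Debian it is normally launched by the init scripts according to `/etc/default/mdadm`, so the manual invocation below is only illustrative):

```shell
# Spare migration for arrays sharing a spare-group is performed by the
# mdadm monitor daemon, not the kernel. Scan the arrays listed in
# /etc/mdadm/mdadm.conf and run in the background:
mdadm --monitor --scan --daemonise

# On Debian this is usually enabled instead via /etc/default/mdadm
# (START_DAEMON=true), so check that file before starting it by hand.
```

With the daemon running and a `MAILADDR` set, a failed member in `/dev/md1` should trigger both a mail alert and the migration of `/dev/sdc1` from `/dev/md0` into `/dev/md1`.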
Quote from WildTux:

nvidia-prime is the one I prefer: simpler than Ubuntu's solution and multi-OS. nvidia-prime allows real use of the card, unlike Bumblebee, which virtualises the GL layer. You can run quite a few things on a 630m (e.g. TB 2013 on medium settings, and I only have a 610m).
Quote from zentux:

H2L29, why this question? Are you a gamer? Do you want to know what this card is capable of on Linux?