kirubakaran
Kirubakaran Athmanathan
k:
- meet aswin
- discuss competitor research use-case
- mattermark newsletter
- do competitor research for histre
- to improve histre's mktg (re marc & steve)
- create some content
- improve email template (postmark)
- in preparation for sending broadcast and drip emails
- leave drip.com soon
- hiring
ralph:
- search v1
- api to share collections directly w users (without teams)
- self-serve account deletion (so i don't get called out about the claim in the landing page)
aleksa:
- webapp: ui improvements to "add notes to collections"
- extn: votes ui improvements
- webapp: inline editing of collection name and description
michael:
- "show, don't tell" in landing page
- incorporate "public collections"?
Highlights
#colo #deploy #p
Update:
After much testing, I've now confirmed that the method described in this article is no longer necessary; the same result can be accomplished with just --chdir=path-to-symlink and --pythonpath=path-to-symlink-venv-site-packages.
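For reference, a minimal sketch of how those two options might look in a config file, assuming the flags above are gunicorn's (uWSGI exposes similarly named options). The paths are hypothetical placeholders standing in for the symlink layout the note describes:

```python
# gunicorn.conf.py -- minimal sketch; paths are placeholders for the
# symlink layout described above, not real deployment paths.

# Change into the symlink that points at the active release before loading the app.
chdir = "/srv/app/current"

# Add the symlinked virtualenv's site-packages directory to the Python path.
pythonpath = "/srv/app/current-venv/lib/python3.10/site-packages"

bind = "127.0.0.1:8000"
workers = 2
```

The command-line equivalent would simply pass --chdir and --pythonpath pointed at the same symlinks.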
Highlights
Vitalik's response:
https://old.reddit.com/r/ethereum/comments/ryk3it/my_first_impressions_of_web3/hrrz15r/
Related:
https://news.ycombinator.com/item?id=29850223
#public
Highlights
Not much upside to Stripe employee stock grant! #public
https://www.teamblind.com/post/Stripes-New-Offer-policy-on-RSU-FzTqrx4K
Highlights
Has info about Hurricane Electric Fremont 2
#public
Upvoted: 20 years of homelabs
(Image: sloomy155's co-location homelab in 2021)

Hello r/homelab (super long post, maybe the longest post ever here?)

Long time lurker, first time poster. In fact this is my first Reddit post ever (other than a couple of test posts to try to check the syntax). The only other social media I have an account on is LinkedIn and that doesn't get used too much. My username doesn't mean anything, it's just a random collection of letters and numbers.

I didn't even know the term homelab until I came across this subreddit, probably a couple of years ago now? I have read a bunch of posts here, and have seen r/homelabsales and r/DataHoarder as well. It has been interesting to see other people's perspectives.

I never viewed my systems as a lab really; I have been hosting my own DNS/email/web/etc. on the internet since about 1996. In my earliest days I volunteered at a super tiny ISP that one of my friends helped start, and that was my first experience hosting live systems on the internet. That ISP closed down eventually and the remaining services ended up being hosted out of my home (though I never was paid, I did it for the experience at the time). Here I am ~25 years later still messing around.

If you just want pictures and don't care about my stories, then here they are. I have comments on each image on imgur, a small fraction of what I cover below:
- Homelab starting around 2001 (most geeky I suppose)
- First co-location server starting around 2006-2008 (actual pictures taken much later, nothing special)
- Home lab starting around 2011 (nothing special)
- Homelab in 2021 (a bit better)
- Co-location in 2021 (coolest, to me anyway)

High level, what I use my homelab for:
- DNS, email, web hosting of my personal stuff.
- Some email/web/dns hosting for friends/family.
- DNS: BIND (host my domains and have internal recursive DNS as well)
- Email: Postfix + Cyrus IMAP + SpamAssassin + Anomy Mail Sanitizer (config dates back to 2002ish)
- Web: Apache
  - I could easily host all of the pictures I posted but used imgur since that is the more standard reddit thing to do.
- DB: MySQL
- Owncloud (been using it since before Nextcloud existed, just have stuck with it)
- Backing up my DVD/Blu-ray collections (over 3,000 discs)
- Streaming my backed up collections via DLNA, mainly to Western Digital Live TV boxes (local LAN only)
- Off site backup (home<->colo)
- Librenms monitoring
  - Total of 29 systems being monitored both at home and at co-location
- Colo server handy at times for testing network connectivity/routing when I see an issue at work (day job is managing server/storage/network infrastructure). Just having a system in a unique location (relative to work) is handy for diagnostic purposes, though this is way down on the list of priorities.
- Messing around with software, though I do much less of that at home in 2021 than I did in 2001.
  - Every employer I've had for the past 21 years has had extra gear/capacity laying around for me to "lab" stuff out at the office.
- VPN access (site to site VPN from home->colo and, when traveling, from phone/laptop->colo)
  - Currently use OpenVPN, though I have been wanting to check out Wireguard for a while now, just haven't gotten around to it.
- Splunk logging (free license)
  - I have used Splunk professionally for about 15 years now, so it made sense to "use" it at home too (used quotes around "use" because it really doesn't get used much, maybe once every few weeks I poke at it).
  - After filtering out some crap it says I average about 7MB/day (500MB limit); most recently 116k events across 18 systems in the past 24 hours.
- Also proxy most web requests from home computers through colo across the VPN.
- Have a dedicated VM in my colo for accessing the remote corp IT company network over VPN. Most of my work involves my own datacenter network (which I built from the ground up 10 years ago) at the company, which I log in to separately from home.

And here begin my homelab stories; hopefully some of you can appreciate them. I'm constantly told I write excellent documentation, so be prepared for some details (I spent probably over 7 hours not only writing/revising the post but digging up all of the old specs and pictures from years past).

The first phase (starting around 2001)

Here are 11 pictures I chose from that era to share. I hope you understand I don't remember, nor do I have documented, most of the details behind the systems I had at the time. But I had a mixture of mostly tower and some rack mount systems. All x86 systems ran Debian Linux (having switched exclusively to Debian in 1998). And yes, I even ran Linux on my desktop (at the time I believe I used AfterStep as my WM, though I did dabble in KDE when it was pre-1.0 in the late 90s?). I even played games on Linux. I played a lot of the original Unreal Tournament (online), along with several other Loki games that I still have on CD (many of which I have never installed).

I also have some non-x86 boxes including Sun UltraSPARC, SGI Indy, and a couple of AIX systems too. All of which came from a company I worked at that closed their local office. They developed software for various Unix and Linux systems. I really wanted to take one of their HPUX and Tru64 servers but they couldn't spare them. These non-x86 boxes got minimal use.

My network was powered by an Extreme Networks Summit 48 10/100 2U switch which I bought off someone I knew online at the time. I think I bought it in 2000 or 2001, and have been a happy user of their gear for the past 20 years (never knew of them before this time). One of the tower servers was from a company called SAG Electronics; I remember still reading their ads, in PC Magazine perhaps, and drooling over their stuff. That wasn't my server, it was my friend's, who hosted it at my apartment on a dedicated DSL line for his websites.

For some reason or another I became a fan of PC Power & Cooling power supplies. I wanted a quality power supply and they seemed to my untrained eye to take more care in making quality products. Maybe I'm wrong and was just lucky, but I have had good success using their power supplies over the past 20 years now, never had a failure. I have two of their PSUs in place today, one is about a decade old, the other maybe 3-5 years old (both are the same model).

My DSL connections (one for me, one for my friend, each 1Mbps/1Mbps) came in over Qwest DSL and originally the ISP portion was oz.net. My DSL connection had 8 public IPv4 addresses and I hosted my DNS, web, email etc. That lasted until about 2006, maybe 2007 or even 2008, I don't recall, when Oz's customers got sold yet again to another ISP. This ISP sent me notice saying all my IPs would be changing. That was a deal breaker for me. Changing my IPs when I host authoritative DNS was going to be a real pain.
So I decided to go colo at that point.

The second phase (starting 2006, maybe 2007 or 2008)

I got a server from the company I worked at, and this time I have some specs for you:
- Chassis: Supermicro 811I-420 w/420W PSU
- Motherboard: Supermicro X6DVA-EG
- CPU: 2 x Intel Xeon 3.6GHz 800FSB
- Memory: 4 x 1GB ECC REG DDR
- Network: 2 x 1Gbps Intel (on board)
- RAID Card: 3Ware 8006-2
- Storage Config:
  - 2 x 120G SATA
  - I think I upgraded the disks to something bigger but am not sure
  - RAID 1 (hardware)
- Filesystems: I assume ext3
- Software: I think Debian Linux and VMware Server (aka GSX)

I don't have any pictures of the server in the rack. It was hosted at a small mom+pop ISP, and I was in a shared cabinet; the only way I could get access was to call them, then they would drive out, meet me, and let me in. Didn't want to upset them by taking pictures or asking to take pictures, not that it was an impressive facility. I remember being worried about internet overcharges and had them cap my bandwidth on their switch/router at 1Mbps; since that is all I had at home, it wasn't going to be a big deal.

I was given a new network switch for home about this time, replacing my Extreme Networks Summit 48 (a 2U switch from the late 90s) with the latest generation of that series (but still an older product), a Summit 48si, still 10/100 (with gig uplinks; I had nothing at home that used gig at the time). It also used their latest operating system (for non-chassis switches anyway), whereas the older switch could not be upgraded further. However it was 1U, and super loud. I did a fan modification to the system, replacing the stock fans with Sunon Maglev fans. I don't know if Noctua was around, I hadn't heard of them at the time. I came across Sunon somewhere and their marketing looked cool to me. I'm not a fan expert. This is the only fan modification I've ever done, that I can think of anyway. I try to stay away from these kinds of changes as they more often than not seem to go badly for me (one such change described later). I'm fine with component level stuff but getting into wires and splicing and stuff makes me uneasy. The mod worked fine. The switch was much quieter, far from quiet but bearable.

The third phase (starting in 2011)

While I was in transition between the second and third phases (moving from Washington state back to California) I hosted my stuff inside Terremark vCloud Express, which was a VMware based cloud provider at the time (later acquired by Verizon and eventually spun off or shut down, I don't recall). It worked OK for my minimal workload but I really had to limit what I was able to do to keep my costs reasonable. I only used Terremark for a few months I think.

Then I purchased a new server from a Supermicro supplier I had been using for many years. I don't have any pictures of this system; in fact I just took it to be recycled a couple months ago, having retired it about 3 years ago now. This supplier had a $100/mo data center hosting package with unlimited 100Mbps bandwidth (and onsite support), which I was excited about.
I do have specs for this system even though I don't have pictures:
- Chassis: Supermicro 815TQ-R450U w/dual PSU
- Motherboard: Supermicro X8SIU-F
- CPU: 1 x Intel Xeon X3430 (4 core)
- Memory: 4 x DDR3 2GB 1333MHz ECC CL9 DIMM
  - Later upgraded to 16GB
- RAID Card: 3Ware 9750-4i with battery backup
- Storage Config:
  - 4 x Seagate 2TB SAS
  - RAID 1+0 (hardware)
- Filesystems: VMFS
- Software: ESXi 4.1
  - Later updated to 5.0 or 5.1
- Had an IP KVM module built in (though it was unreachable for several years)

A few years later this vendor discontinued their data center offering, so I ended up going direct with the data center themselves. I went on site and met with the vendor's staff and they helped move my server to another cabinet (where my gear still sits today), and I got a direct internet link from the data center, all of that for $200/mo instead of $100. Still a decent deal though; not much more expensive I think than my first colo, and I have 100X the bandwidth and had about 1/4th or 1/5th of the cabinet to myself (at the time I only had the single 1U server).

The system ran fine for a long time. I did have a couple of hard disk failures, other than that no failures. The 3Ware integration with vSphere was quite limited (I really like 3Ware, going all the way back to my original systems in 2001).

Meanwhile, at home I had a significantly downsized homelab, consisting of a single beefy (to me) Debian server, with a Soekris net5501 OpenBSD-based firewall.

I purchased a refurbished HP xw9400 from their outlet store and added/changed some things to end up with these specs:
- CPU: 2 x AMD Opteron 2480 4-core 2.5GHz (8 total cores)
- Memory: 8GB
- RAID Card: 3Ware 9650SE-4 with battery backup
- Network: 2 x Nvidia Nforce 1Gbps (on board)
- Video: Nvidia GeForce GT240 (fanless)
- Storage Config:
  - 4 x 2TB Western Digital RAID Edition (maybe RE2) SATA disks
  - RAID 1+0 (hardware)
- Filesystems: ext4 I think...
- Software: Debian Linux
- DVD Drives: 1 x HP DVD SATA drive

It was mostly just a file server; my critical systems were hosted at the colo, and I don't recall what else I used it for at the time.

I eventually moved the 3Ware controller and drives to a new smaller, quieter AMD Athlon system (pictures of this chassis are later, as I reused the chassis and PSU for my Ryzen):
- Chassis: Lian Li PC-Q25B Black Aluminum Mini-ITX Tower Computer Case
  - (fans later replaced with a Noctua NF-A14 PWM 140mm Case Fan & a Noctua NF-F12 PWM 120mm Case Fan)
  - Removed the hot swap backplane, it was problematic
- Motherboard: ASUS M4A88T-I Deluxe Mini ITX w/onboard Radeon video
- CPU: AMD Athlon II X3 455 3.3GHz Socket AM3 Triple-Core Desktop Processor
- Power Supply: PC Power and Cooling Silencer MK III 500W
- Memory: Crucial 8GB (2 x 4GB) 204-Pin DDR3 SO-DIMM DDR3 1333 (PC3 10600)
- RAID Card: 3Ware 9650SE-4 with battery backup (moved from HP server)
- Storage Config:
  - 4 x 2TB Western Digital RAID Edition (maybe RE2) SATA disks (moved from HP server)
  - RAID 1+0 (hardware)
- Filesystems: ZFS (for snapshots mainly)
- Software: Debian Linux

The PSU was a super tight fit, but everything ran great for years. I'm not sure if there were ever any other 3-core CPUs released (I assume 3-core because of CPU binning). The BIOS had an option to enable the 4th core but I never tried it. Just took this board/CPU to be recycled a couple of months ago.
Fortunately I kept it when my Ryzen fried itself (below) to verify that the PSU was still working fine (and it was).

My internet connection was protected by a pair (eventually just one active) of Soekris net5501 firewalls:
- Board/Chassis: Soekris net5501
- CPU: AMD Geode LX 500MHz
- Memory: 512MB (on board)
- Network: 4 x VIA VT6105M 10/100 (on board)
- Storage Config:
  - 1 x 8GB SanDisk SDCFH-008G CF card
- Filesystems: UFS
- Software: OpenBSD

My network switch went from an Extreme Networks Summit 48si at my previous home to a small metal Netgear switch at my new home. The network wasn't complicated enough to need a fancy switch anymore, and the power savings and noise reduction were nice too.

All of that was protected by my first double conversion UPS:
- Make/Model: Cyberpower OL1000RMXL2U
- Outlets: 8 x NEMA 5-15R
- Management: Network card
- Expansion: ABP36VRM2U battery expansion pack

Despite it being a double conversion UPS (I checked the specs again yesterday), this UPS was nearly silent when not on battery power. I really liked that UPS. I'm really mad at myself for frying it years later. I went to replace the batteries in 2016 or 2017. I purchased a volt meter to try to be super careful in wiring them up right. I wired them up right the first time and let it charge. Checked the voltages and determined 2 of the batteries were bad (voltages too low). So I exchanged them. I was lazy and rushed to put the batteries back in again, and when I turned it on it made a clicking noise. It fried pretty quick. I had wired the batteries wrong that time. Some of the cables were melted. It didn't damage anything though. I did have equipment connected when it fried. It is not the first Cyberpower to sacrifice itself in the line of duty for my equipment (though it is the most recent). I do like Cyberpower, having used them since perhaps 2000 or even maybe before that, I don't recall. I have used APC as well, though I retired my last APC at home in 2011. I protect everything electronic with UPSs, whether it is simple accessories, or TVs, stereos, streaming boxes, everything.

The fourth [and current] phase (starting in 2017)

I moved yet again, and so it was time to revamp the homelab again. This time (current location) it was to the central valley in California, with peak temperatures much hotter than where I was in the bay area before. My little Athlon server worked fine the first year I was here. The 2nd year I decided I wanted to build something new, something that would have better cooling and be better able to handle the hotter temps (the highest ambient temp I have noticed at my new server is about 92 degrees F). It's quite possible the older Athlon could do it, but I just wanted something with more airflow. So I built my server goliath. The name makes it sound really big; I guess it is not, I just picked it because it has a lot of storage (a lot being a lot for me, I have nothing compared to some on r/DataHoarder).

When this server started out I moved my 3Ware card yet again to this system, with its 4x2TB disks, then connected a pair of 6TB disks with ZFS to the motherboard controller, and a single SSD for boot.
About two years ago I replaced the 3Ware with the LSI controller and replaced the disks and such, so I present the current configuration of the system as it stands now:
- Chassis: iStarUSA D7-400-6 Silver Aluminum / Steel 4U Rackmount Compact Stylish Chassis, 6 External 5.25" Drive Bays
- Case Fans: 2 x Noctua NF-A8 ULN 80mm 3-pin SSO2-Bearing 1400-1100rpm Premium Fan
- Hard drive holders/coolers: 2 x Lian Li 3.5" / 2.5" HDD Rack Kit Model: EX-36B1
- CPU Fan: Thermaltake CL-P032-CA06SL-A 60mm Engine 27 1U Low-Profile 70W
- RAID cooler: GDSTIME Graphic Card Fans, Graphics Card Cooler, Video Card Cooler, PCI Slot Dual 90mm 92mm Fans, VGA Cooler
  - (fans later replaced with Noctua NF-A9 PWM, Premium Quiet Fan, 4-Pin (92mm, Brown))
  - This fan "mod" was trivial as the cooler used standard fans with standard connections
- Power Supply: PC Power and Cooling Silencer MK III 500W
- Motherboard: Gigabyte X150-PRO ECC-CF
- CPU: Intel Xeon CPU E3-1240L v5 @ 2.10GHz (25W)
- Memory: 32GB (4 x 8GB DDR4-2400 ECC Unbuffered DIMM CL17 1Rx8 1.2V Micron A (Server Premier))
- Video: ASUS GeForce GT 720 GT720-1GD3/CSM 1GB 64-Bit DDR3
- Network: 1 x Intel I219-LM 1Gbps port (onboard)
- RAID Controller: LSI MegaRAID 9361-8i 2GB w/CacheVault (battery pack)
- Storage Config:
  - 2 x HGST Ultrastar DC HC320 HUS728T8TALE6L4 8TB 7.2K RPM SATA 6Gb/s 512e 3.5"
  - 2 x WD Gold WD8003FRYZ-01JPDB1 8TB 7.2K RPM SATA
  - 4 drives above running in RAID 1+0 (hardware)
  - 2 x HP (Intel) VK000960GWCFF 960GB SATA SSD
  - RAID 1 (hardware)
  - 1 x Western Digital Easystore 10TB USB (offline backups, powered off when not in use)
- Filesystems: LVM + ext4, and one with reiserfs3
- Blu-ray Drive: 1 x LG BE12LU30 Blu-ray Disc Rewriter External eSATA/USB 2.0 12x Super Multi Blue LightScribe
- DVD Drive: 1 x GE24NU40 Super Multi External 24x DVD Rewriter
- Software: Devuan Linux 3

LSI Card Temperature

I know that LSI cards (mostly in IT mode, is it?) are quite popular here. I purchased my LSI card on eBay (seemed to be a reputable seller and the cost seemed reasonable at $389 for it being new), then purchased the battery pack on Newegg. I had issues getting them to talk to each other and had to go to LSI support; we went back and forth for quite a while. Eventually they replaced both card and battery pack and things have been fine since. During that exchange I asked the support guy (who seemed like a super cool old school geek guy I can relate to, very laid back, probably been there forever) about the temperatures of the LSI card. With my original PCI slot cooler (not the one above), the same one I was using on my 3Ware card, the LSI chip was hovering at about 60C I think. The LSI chip has its own heatsink+fan as well. That seemed high to me (quite a bit higher than the 3Ware was at). The support person said 60C is "OK" but he strongly advised not to let the chip go above 55C (he called it the "ROC" chip I think). Wanting to be super careful, I purchased a new PCI slot cooler with 2x92mm fans and put it right next to the LSI card. I also set up trending of the temperature using Librenms (works fine for home, wouldn't use it for work personally anyway). The chip never seems to hit above 55C even when the ambient temperature is 92F. Normally it will peak at 54C, but it is otherwise lower than that. The lowest I have seen is the high 40s, when the ambient temperature was in the upper 60s. The Linux web-based management software for the LSI card really sucks compared to 3Ware in my opinion.
The CLI is powerful though.

Anyway, I wanted to call that out as I have seen several posts here and on r/DataHoarder that have folks mentioning LSI chips running much hotter; I think I've seen one or two claiming their chips run at 70C+. While they may work at that temp, I just fear it will lower the life of the component. So I am happy keeping mine closer to 50C. But I think it's stupid I need 2x92mm fans to do it. The card needs a better heatsink/fan combo design. I can only recall having maybe 3 RAID controller failures across ~600 or so servers I have managed in the last 20 years (probably 80% of those servers ran 3Ware cards).

Favored CPU

Also wanted to give a shout out to the CPU, the Intel Xeon E3-1240L V5. This is a quad core 2.1GHz Xeon that runs at only 25W! Oh, when I came across that CPU I wanted it so bad (originally I wanted another slightly different model that had a built-in GPU but couldn't find it anywhere). It was SUPER difficult to find. Several places claimed they had it in stock, then I would order and would wait... and wait... They would say they were waiting on their distributor. After weeks of waiting with no update in sight I would cancel my order. The only place I found that had it was Dell. The CPU was about $500, the most I had spent on a CPU in a long time. But I really love that it's such low power usage while still a full fledged Xeon. I think the CPU is similar to what I have in my newer Lenovo P50 laptop (which uses an i7 but has a Xeon option). The Xeon in the goliath system runs super cool as a result. It was quite a step up for video encoding as well vs my other systems at the time.

Containers

In an effort to keep things "cleaner" in this system (having fewer packages installed on the core server), I opted to set up containers with LXC and I run several such containers:
- VPN - site to site VPN to colo
- DNS - just runs BIND
- Handbrake - runs VNC + handbrake
- media - runs my DLNA streaming software TV Mobili (more below)
- development - rarely used, but mainly for compiling stuff, or building packages.

I first set up LXC back in 2014 for my company and do like it (for specific use cases anyway). I have never been a fan of docker style containers. I assumed I would build more with LXC but have yet to do anything more with it. I guess I standardized on LXC for stuff at home, and VMware VMs for stuff at my colo.

To ZFS... or not to ZFS...

I know ZFS is very popular here. I have used ZFS off and on since probably 2009ish. It has its use cases for sure. For my personal needs I feel more comfortable with hardware RAID and ext4. I manage servers for a living and do run fibrechannel and iSCSI SANs, as well as NFS, and run other filesystems like XFS, and I do run ZFS in some cases at work (the only use case is to leverage the compression on low tier MySQL systems). I deployed ZFS on my Athlon server (on top of 3Ware RAID) mainly for snapshots and ran with it for years. I really wanted snapshot support. But at the end of the day I never really used the snapshots. I learned the hard way (at work) how ZFS behavior changes when it gets to be 80% full. For me that was the deciding factor not to use ZFS in my current build. My main filesystem is 93% full (with 780G free, and 3.5T free in the volume group), and a smaller SSD filesystem (reiserfs3, tons of small files) is 94% full. My filesystems run full like that for a long time. Could be that for my use case the ZFS overhead of running at 94% full wouldn't be a big deal. But whatever. With 3Ware before, and with LSI now, I do have weekly scrubs happening at the controller level.
That is good enough for me. ext4 is the old boring reliable choice, so that's what I went with.

Most of my backups are done using a tool called rsnapshot (or manual rsync for file server data which doesn't change often). When I got this goliath system I had an idea to use a small ZFS filesystem with dedupe enabled to use with rsnapshot (instead of using hard links). This was with the original 3Ware RAID and 4x2TB disks in RAID 10. I even upgraded the memory to 32GB from 16GB just in case. The filesystem was going to be about 200GB in size I think. I don't know what the issue was but the performance was just terrible; I was getting maybe 300 KILOBYTES per second to the filesystem according to iostat. Maybe some weird behavior with 3Ware or something, I don't know (it certainly wasn't the fastest controller but not that slow). So I quickly abandoned that idea and went back to rsnapshot with hardlinks. It's so very rare that I need to go back to backups for anything. Seems like less than once a year, maybe once every 2 years, and usually for something trivial.

Video encoding

A few years ago I decided to really up my game with backing up my movies and TV shows. In the end it turned into a big hobby. I have purchased more than 3,000 DVD and Blu-ray discs, probably more than 2,000 of which in the past 5 years. Backing them up and encoding and cataloging them is quite a tedious process. But at one point I got into a groove doing it and got a good process for getting it done accurately. I've never used any peer to peer stuff, no BitTorrent or anything like that. All of my stuff is purchased on disc and stored in CD binders. Originally I would rip and encode using a Linux tool called dvd::rip, which I believe is a Perl based GUI; this was before 2010 I think. It even had a cluster mode where you could distribute the encoding to multiple systems in parallel. I think the codec I used was Xvid at the time. Later h264 came out, and I became aware of Handbrake and have been using that ever since, first on Windows, later on Linux. When I got this new Xeon it really boosted my encoding throughput. But I still had a massive backlog and was never able to catch up.
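As an aside, a minimal sketch of what a batch-encode loop for a backlog like this might look like with HandBrakeCLI; this is illustrative only, and the directories, RF value, and x264 preset below are hypothetical placeholders rather than the settings used in the post:

```python
#!/usr/bin/env python3
"""Batch-encode a directory of rips with HandBrakeCLI (illustrative sketch).

Assumptions (not from the post): HandBrakeCLI is on PATH, rips live in
~/rips as .mkv files, and encoded output goes to ~/encoded as H.264 MP4s.
"""
import subprocess
from pathlib import Path

RIP_DIR = Path.home() / "rips"      # hypothetical source directory
OUT_DIR = Path.home() / "encoded"   # hypothetical destination directory
OUT_DIR.mkdir(parents=True, exist_ok=True)

for src in sorted(RIP_DIR.glob("*.mkv")):
    dst = OUT_DIR / (src.stem + ".mp4")
    if dst.exists():
        continue  # skip titles that were already encoded
    subprocess.run(
        [
            "HandBrakeCLI",
            "-i", str(src),
            "-o", str(dst),
            "-e", "x264",                    # CPU (software) H.264 encoding
            "-q", "20",                      # constant-quality RF value (placeholder)
            "--encoder-preset", "veryslow",  # speed/efficiency preset (placeholder)
        ],
        check=True,
    )
```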
Enter my first dedicated encoding system, my Ryzen 3700X:
- Chassis: Lian Li PC-Q25B Black Aluminum Mini-ITX Tower Computer Case (re-used case from the Athlon above)
  - (fans eventually replaced with a Noctua NF-A14 PWM 140mm Case Fan & a Noctua NF-F12 PWM 120mm Case Fan)
- Motherboard: GIGABYTE X570 I AORUS PRO WIFI AMD Ryzen 3000 PCIe 4.0 SATA 6Gb/s USB 3.2 AMD X570 Mini-ITX Motherboard
- Power Supply: PC Power and Cooling Silencer MK III 500W (re-used from the Athlon above)
- CPU Fan: ZALMAN CNPS8900 Quiet 110mm PWM Fan Long Life Bearing Ultra Quiet Slim CPU Cooler
- CPU: AMD RYZEN 7 3700X 8-Core 3.6 GHz (4.4 GHz Max Boost) Socket AM4 (65W)
- Memory: Crucial Ballistix Elite 8GB (2 x 4GB) 288-Pin UDIMM DDR4 (PC4-24000) Server Memory Module Kit, CL=15, Unbuffered, 3000 MT/s Speed, NON-ECC, 1.35V, 512Meg x 64, Single Rank, x8 Based
- Network: 1 x Intel 1Gbps port (onboard)
- Video: MSI GeForce GT 710 Low Profile
- Storage Config:
  - 1 x SAMSUNG 970 EVO PLUS M.2 2280 250GB PCIe
- Filesystems: LVM + ext4
- Software: Linux Mint 20

I'm aware that at least on non-Linux platforms GPU encoding with Handbrake is possible (maybe it is on Linux as well these days), though I have read that GPU encoding, while faster, is lower quality, so I would stick to CPU encoding regardless.

I sort of expected to use it for SOMETHING other than video encoding, but in the end, when I don't have a lot of stuff to encode, I keep it off, because I'm afraid it may fry itself again. Less than a year after I bought it, it was encoding overnight and when I got up the next day it was down. I don't recall if the screen had anything on it or if it was black, but it was on, and it could not respond to ping. I turned it off (think I had to yank the power), and it would not turn on again. I tried many times to turn it on; it would not turn on. I removed the Ryzen board+CPU and put the original Athlon board+CPU back in and it powered up right away. So the PSU was fine. I tried powering on the Ryzen again a few more times and the board literally had a mini fireworks display of sparks or something coming out of one of the chips and a puff of smoke. I want to say I've never had a complete motherboard failure AT HOME in more than 20 years (perhaps never in my life). So I was shocked. I completed the RMA process with Gigabyte and they sent me a new board. I was hoping for a newer revision number indicating they improved the board, but the revision stayed the same. Fortunately no other components were damaged. The system has encoded probably a couple thousand things since without issue. But I'm constantly worried it will fry itself again.

I have spent probably thousands of hours ripping, encoding, and cataloging my DVDs and Blu-rays. I have just over 10,000 TV episodes and over 700 movies. I struggle hard to find anything else that may remotely interest me at this point; I've literally scrolled through thousands of titles trying to find something else but often come up empty now. Total space for that media is 7.6TB. I "cut the cord" in 2019, and recently made a Gantt chart (WARNING: image size is 21,541 x 4,276) of the TV series to try to see at what point I lost interest in cable TV. (Side note: most Gantt chart tools aren't geared for tracking 30 year periods of time; Visio handled it fine, though image exporting was a bit problematic.) I was a big time TiVo user for 15+ years, but for the last 3+ years of TV usage TiVo really wasn't recording much at all anymore and I struggled to find anything worthwhile to watch (even with every premium channel). It felt so weird to cut cable TV but I did it.
I switched entirely to my home collection (which I had done already about 8 months before cutting cable). I do not use any streaming services.

I measured the video encoding performance comparing my goliath system running the Xeon, vs the Ryzen, vs my Lenovo P50 running an i7 quad core processor, on the same ~1GB DVD rip in Handbrake (probably slightly different versions) using the same encoding settings (very slow, and same RF setting, h264), all on Linux of course:
- Goliath with quad core Xeon E3-1240L v5: 32.8 fps
- Laptop with quad core i7-6820HQ: 30 fps
- Ryzen with 8-core 3700X: 82.5 fps

If I get a few new DVDs I'll encode them on Goliath; if I get a bunch of Blu-rays then they go to Ryzen (the server name is ryzen, I was lazy in naming that one). The Ryzen system allowed me to catch up on my encoding backlog; it still took a good 6 months I think but it did a great job.

I do my streaming with a defunct software product called TV Mobili. I'm probably the only one left in the world that still uses this; the version I have is from 2015. I'm a licensed user and it really works flawlessly for my basic needs of streaming to Western Digital Live TV (also defunct). I have 2 WD TVs in use, and 2 more as spares. I also have a few Rokus which I played with a bit but I prefer the WD TV more (Rokus sitting on a shelf now). I do not do any transcoding, everything is h264 1080p or below (my TVs are 1080p, no 4K).

My firewall had to be upgraded as my Soekris boxes were only 10/100, and my new internet connection was 200Mbit or maybe 250. Soekris themselves seemed to be stagnant (they have since ceased all U.S. operations), and I came across the PC Engines APU2. This seemed like a real good replacement:
- Board/Chassis: PC Engines APU2 (fanless)
- CPU: AMD Embedded G series GX-412TC, 1 GHz (4 cores)
- Memory: 2GB (on board)
- Storage: 1 x mSATA Innodisk DEMSR-32GD09BC2DC 32GB 3ME3
- Filesystems: UFS
- Network: 3 x Intel i211AT 1Gbps ports (onboard)
- Software: OpenBSD 6.8

I have one port connected to my Comcast bridged Motorola MB8600 modem, one port connected to my switch, and one to my ASUS RT-AC68U Wireless-AC1900 wifi AP (in "AP" mode - wifi is not granted any access to my internal network). I also have a minimal powerline ethernet setup as well, connected to my livingroom. I have several IP cameras (internal network only) for watching my cat(s) when I am away. Or watching the wife too.

My switch started out as a basic metal 8-port Netgear, but earlier this year I replaced it with an Extreme Networks Summit X440-8t which I bought on eBay. It was new, as in never having been used (there is a command in the software to show how many hours the switch has been in use, to validate that), and the price was great, so I was real happy to get it. It is fanless, it idles at only 19W, and has basic layer 3 functionality. Total of 12 ports, 8 RJ45 and 4 SFP, all 10/100/1000, no 10G here. It does run hot to the touch but always well within specs; I think the hottest I have seen it is 55C, its normal operating range is up to 68C, and it is currently at 45C. This layer 3 switch came in handy later when I wanted to configure some wifi access points for my job before taking them to a brand new office (I have been WFH since about 2014).
I had no experience working with these APs, but I was able to easily create the same VLANs they would use at their destination on my network and just enable routing between the VLANs, and off I went.

I upgraded my HP xw9400 workstation to 6-core CPUs and 12GB of memory, added two more DVD drives (they helped get through my backlog; at one point I probably ripped 40+ DVDs in a day across 4-5 drives), and replaced the boot disk with an SSD. It runs Windows 7 today, and stays off 99% of the time. The only thing I really use it for is dealing with certain Lionsgate Blu-ray movie titles.

This is all protected by a new (at the time) Cyberpower OL1000RTXL2U double conversion UPS (no expansion battery pack, no network card); the fan runs all the time, very loud, and it took a long time to get used to. This UPS also protects most everything else in my home office including monitors, laptop, accessories, everything (not the air filter system or paper shredder though). I have been using Network UPS Tools (nut) for 20 years, and I continue to do so today with my current UPSs. I have a Cyberpower PR1500LCD in my livingroom protecting all of my stuff in there. I have no regular computers in my livingroom anymore, so I came up with an idea earlier this year to use one of my Soekris boxes that have been sitting on a shelf for years. They only draw about 5W of power at idle. Just because I wanted to, I set up one of the Soekris boxes with OpenBSD again and use it only to monitor the UPS (just to see the load). Certainly cheaper than buying a network monitor card for the UPS.

Co-location in 2021

Still part of the same "phase", but I think it deserves its own section as there's quite a bit of stuff here.

These are probably the coolest of all of the recent pictures, at least to me. About 18 months ago I purchased my first Extreme Networks Summit X440-8t switch from eBay (it was new-ish, had 1 hour of usage recorded by previous owners). I installed that switch this past July (so now I have two of these switches). Completely overhauled my network setup with the switch, and used almost every port in the process. But that's OK, I don't plan to add anything else (space and power limited).

Currently I have two rackmount systems. I'll start with the older of the two, a Dell R230 I bought new in late 2018; I have upgraded it a bit since, here is the current config:
- System: Dell PowerEdge R230
- CPU: 1 x Intel Xeon CPU E3-1230 v6 @ 3.50GHz (72W)
- Memory: 4 x 16GB DDR4 Dual Rank Crucial ECC
- Network: 1 x Broadcom BCM5720 Dual port 1Gbps (onboard)
  - 1 x Intel E1G44HT Server Adapter I340-T4 (PCIe)
  - Not in use since I installed my switch
- RAID Card: Dell PERC H730 Adapter 1GB + battery backup
- Storage Config:
  - 2 x Dell SSDSC2KB960G7R (Intel 960GB SSD) - has 100% of write life left after 4 years!
  - RAID 1 (hardware)
  - 2 x HP / Intel SSD DC S3520 Series 960GB 2.5-inch 7mm SATA III MLC (6.0Gb/s) - unfortunately the PERC does not show metrics for these drives, I guess because they lack Dell firmware.
  - RAID 1 (hardware)
- Filesystems: VMFS
- iDRAC 8 Enterprise
- Software: ESXi 6.7 Update 3

I really wanted another 25W Xeon, but could not find any systems that had it, or even close to it; it seems like 72W was the minimum. I was also tired of Supermicro, especially the poor out of band management. It pales in comparison to iDRAC or HP iLO (which I prefer). Couldn't find a readily equipped server from HP at the time so I went with Dell.

Less than a month ago I installed a new member of the family in my rack, a refurbished Dell R240 from Dell's outlet store (wish it had the LCD, those are cool).
Though it's mainly there as a backup (I still have on site support with Dell for the R230, and haven't had to use support yet), the R240 needs more RAM and SSDs before it can be a real backup, but I wanted to get the deal while it was there.
- System: Dell PowerEdge R240
- CPU: 1 x Intel Xeon E-2234 CPU @ 3.60GHz (71W)
- Memory: 2 x Hynix HMA81GU7CJR8N-VK 8GB Single Rank
- No RAID card (decided to save the cost of the RAID card (and the overhead, as far as reduced usable capacity for mirroring) for now anyway, SSDs don't fail often)
  - Onboard SATA
- Storage: 1 x Samsung 850 EVO 250GB
- Filesystems: VMFS
- iDRAC 9 Enterprise
- Software: ESXi 6.7 Update 3

The system is turned on in the picture. I plan to turn it off and leave it off until some time next year when I can upgrade it. Not in any rush.

A couple of years ago I added a Terramaster F4-220 NAS. Originally I had 2x8TB disks in the Dell R230 for my file storage, but decided to deploy this dedicated NAS and put only SSDs in the Dell:
- Model: Terramaster F4-220, 2GB Memory
- Memory Upgrade: Crucial CT25664BF160B 2GB 204-Pin DDR3 SO-DIMM
- Network: 1 x Realtek 8169 1Gbps (onboard)
- Storage:
  - 2 x Western Digital WD120EFAX-68 12TB
  - 1 x Samsung 870 EVO SATA (not in use, yet anyway)
- USB SSD: 1 x HP/Samsung PM863a (MZ-7LM960N) 960GB 2.5-inch 7mm SATA III MLC (6.0Gb/s)
  - Boot disk
- Filesystems: Linux MD RAID + LVM + ext4
  - Hadn't used MD RAID in probably 20 years!
- Software: Devuan 2.1

I put the 8TB disks that were in my R230 into my goliath server above and purchased two more 8TB to go with them.

This past July I added an Intel NUC that I purchased on Black Friday last year and set it up as an ESXi server as well:
- System: Intel NUC Bean Canyon i7 Kit (Tall)
- CPU: 1 x Intel Core i7-8559U @ 2.70GHz
- Memory: Crucial CT2K16G4SFD8266 32GB (2 x 16GB) DDR4
- Storage:
  - 1 x Samsung EVO 860 2TB 2.5" Internal SSD
  - 1 x SAMSUNG 970 EVO PLUS M.2 2280 1TB NVMe
- Filesystems: VMFS
- Network: 1 x Intel 1Gbps (onboard)
- Software: ESXi 6.7 Update 3

It just runs one VM at the moment, which hosts my internal Devuan repos. I was blown away by the 5W idle power draw of this thing and so thought I had to deploy it here.

I have an identical PC Engines APU2 firewall at my colo.

That's it, that's my 20 year history of home labbing. Hope it was a worthwhile read. (Ran into reddit's 40,000 character limit so had to cut some things.) (I'll check back later today in case anyone has questions/comments.)

Submitted November 15, 2021 at 09:25AM by sloomy155: https://www.reddit.com/r/homelab/comments/qulj6s/20_years_of_homelabs/?utm_source=ifttt via /r/homelab
Highlights
#public
"Histre is Pinterest for knowledge"
Highlights
What boosters should a Pfizer vaccine recipient get? #public
Link to the study: https://www.medrxiv.org/content/10.1101/2021.10.10.21264827v2.full.pdf
Highlights
I watched his video on Prolog and I liked his style. Perhaps this video is similarly nice and you might find it useful? I haven't watched it. #p
Assigning recording privileges to a participant
- In a Zoom Meeting click on Manage Participants.
- In the Participants menu navigate to the participant who will be granted recording privileges. Click More next to their name.
#pub