
Windows 10 Update Channels: You’re getting SAAAAAAAC’d


Dam Good Admin: Or at least not entirely useless
February 18, 2019

I'm going to shake it up here and do something totally crazy.
I’m going to talk about operating systems for a moment.
OSD is absolutely something in my wheel-house and I certainly have my own ways of doing things.
However, I don’t really talk that much about it because there’s just so many freaking people blogging about OSD.
I even like almost all of them.
Beyond that, I tend to keep things as simple as possible wherever I can.

I just want Task Sequences to work dangit

However, last week Microsoft did yet another shake-up in how we service Windows 10.

As usual, there seems to be a fair amount of consternation.
Personally, I think it’s a bit of a nothing-burger and while change is hard this is the final nail in the coffin to get Windows as a Service’s (WaaS) cadence to where it should have always been.
To really make sense of why they would make this change you need to understand the whole journey.
What Series of Choices Have Led to This Moment?
When the first versions of Windows 10 were released we were given three branches: Current Branch (CB), Current Branch for Business (CBB), and the Long Term Servicing Branch (LTSB).
The idea was that Microsoft would release CB/CBB 3 or more times a year and you would have 14 months of support from the moment it was released to Current Branch.
LTSB would follow the old pattern of 5-year release cycles and 10 years of support.
The reaction of most admins was swift and predictable: Long Term Servicing Branch!
In reply Microsoft started threatening to sneak into your house and beat you to death with your own shoes.
It was a risk most were willing to take.
Ok, fine, we’re taking Office 365 and going home.
Crap … looked like we were just going to have to buck up and deal with it.
The messaging surrounding the branches from Microsoft was widely interpreted to mean that CB was for consumers and CBB was for … well … businesses.
It’s right there in the name.
The belief being that the ‘ for Business ’ designation was some kind of metric-driven decision made by Microsoft.
In reality it meant nothing of the sort.
I'm not sure I'd call the CBB release completely arbitrary, but it didn't really confer anything special from Microsoft apart from the fact that the CB release had been out for a few months and hadn't destroyed everything in its path.
It most certainly didn’t mean that all your LOB apps were safe.
The problem with waiting for the CBB designation was that the support clock started ticking the moment CB was released.
If it took 4 months for Microsoft to apply that rubber stamp then you just lost 4 out of the 14 months that the release was supported.
If you waited for CBB to even start working on it then that was a hard hit to take.
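The arithmetic above is simple but worth making explicit. A quick sketch (the 14-month and 4-month figures come from the discussion above; the rollout time is an invented example):

```python
SUPPORT_MONTHS = 14   # total support, counted from the CB release date
CBB_WAIT_MONTHS = 4   # typical wait before the CBB stamp arrived

# Months left to pilot, deploy, and finish once you start at CBB:
effective_window = SUPPORT_MONTHS - CBB_WAIT_MONTHS
print(effective_window)  # → 10

# If your org needs, say, 6 months to build, test, and roll out a
# Task Sequence, you're actively deploying for 6 of every 10
# supported months:
rollout_months = 6
fraction_deploying = rollout_months / effective_window
print(f"{fraction_deploying:.0%}")  # → 60%
```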
The end result was that every single day for the rest of your career you were going to be actively engaged in deploying the latest version of Windows 10.
So while Windows 10 might truly be the last version of Windows you ever deploy you will never … ever … stop deploying it.
This was your life now.
The Peasants Revolt.
This might be surprising to some people but apparently there are a few administrators out there who didn't get into IT to spend every waking hour rolling out operating systems.
“Don’t worry” they said.
“It’ll be easy” they said.
“Servicing is the coolest” they said.
“Set it and forget it” said the TV salesman.
If we can't trust TV advertisements then what can we trust in this world?
If any of the above were true then things might have ended there.
Set up a servicing plan or WUfB and push out Feature Updates like they were any other update and move on with your life.
If only that had actually worked.
I don’t want to turn this into a Festivus airing of grievances but there were several show stoppers.
The end result was that most organizations had to turn to Task Sequences to get things to actually work in the real world.
It was about this time that administrators realized that a 14-month support window just was not going to work.
You’re losing roughly 4 months waiting for CB to be christened CBB.
Then you have to get a TS working and find out all the new things it’s going to break in your org.
If a release proved to be a real turd-bucket then you were in deep doo-doo.
The version you’ve barely finished deploying to the last of your systems is about to go out of support and the new release is broken.
It wasn’t a fun place to find yourself.
My TAM at the time was unwilling to confirm in writing but he made it abundantly clear that he was getting reamed out by every single customer he had.
The guy was quite literally walking funny from all the negative feedback.
No one actually responsible for trying to pull this stuff off believed it could work as designed.
Hamstrung by Their Own Product.

Eventually the Windows product team finally started grokking the above reality

Well that and they had a new idea: Microsoft 365

In April 2017 they announced that Win 10 and Office 365 would join each other in a very specific cadence: two releases a year (March and September) supported for 18 months.
In July 2017 they similarly announced that Branches were dead and long-live Channels.
Crucially, in that article Mr. Niehaus himself laid out the following: “The Semi-Annual Channel replaces the Current Branch [CB] and Current Branch for Business [CBB] concepts” [emphasis mine].
So branches are now channels, we’re down to just two (Semi-Annual Channel and Long Term We Kill You Branch), and SAC gets 18 months of support thus essentially erasing the 4 months lost waiting for the previous CBB designation that was kind of meaningless.
But wait … there was a problem.
The policies that make up Windows Update for Business (WUfB) and are used by ConfigMgr’s (Win 10) Servicing Plans have CBB hardcoded into them as well as the OS itself.

While they didn't really formally announce it (I could find no article introducing SAC-T), Microsoft was forced to put out a Semi-Annual Channel (Targeted) (SAC-T) in order for those two products, and those two products alone, to function as intended.
This is laid out in a delightful post by John Wilcox (Microsoft WaaS Evangelist) that I appreciate for this little bit of honesty.
Of their original plans laid out in 2015 he writes: “It was also a complete failure.” The key takeaway from this last article is that right from the start they planned for SAC-T to go away.
It was always a temporary stop-gap until they could update their tools.
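For the curious, the hardcoding shows up in the Windows Update for Business policy values themselves. Here is a hedged sketch of the channel mapping as I understand the documentation; the numeric values are my reading, not gospel, so verify them against current Microsoft docs before relying on them:

```python
# BranchReadinessLevel values used by the WUfB group policies
# (under HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate).
# These numeric values are my reading of the docs, not gospel.
BRANCH_READINESS_LEVEL = {
    16: "Semi-Annual Channel (Targeted)",  # the SAC-T stop-gap
    32: "Semi-Annual Channel",             # plain SAC
}

def channel_name(level):
    """Translate a BranchReadinessLevel value into a channel name."""
    return BRANCH_READINESS_LEVEL.get(level, "Unknown / Insider ring")

print(channel_name(16))  # → Semi-Annual Channel (Targeted)
print(channel_name(32))  # → Semi-Annual Channel
```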
We Want More!
This is a bit of a tangent but the problem with customers is that they’re kinda needy.
For large enterprises, even 18 months just wasn’t enough.
In my own organization at the time we were looking down the barrel of 1703's imminent EOL without having a solid In-Place Upgrade strategy in place.
Plus, we weren't even halfway through our Windows 10 rollout in the first place.

If it takes you 3 years to get to Win 10 it’s beyond optimistic to think you can do IPUs every single year.
At least out of the gate.
Put out a few rock solid no-nonsense releases and maybe we can talk.
So in September 2018 Microsoft announced yet another change: The Fall (September) release would be supported for 30 months for those using the Enterprise and Education SKUs.
Pro users just got the sad trumpet and those of us with deeper pockets rejoiced.
A Moment of Silence for SAC-T.
Everything above leads to the recent announcement that SAC-T is officially dead and that the 1903 release of Windows 10 will not have that channel.
Since this is yet another change to a process that has changed so very much in so very little time (see above), there is the usual amount of confusion and consternation.
What's an Administrator to Do?

What does the retirement of SAC-T actually mean and what do you, dear administrator, need to do?
As the highest paid consultants will tell you: it depends.
In this case it entirely depends on how you roll out your In-Place Upgrades: ConfigMgr Task SequencesNothingConfigMgr Software UpdatesNothingWindows Server Update Services (WSUS)NothingConfigMgr Servicing PlansReconfigureWindows Update for Business (WUfB)Reconfigure I can’t give you actual numbers, you’d have to pester the product group for that, but the overwhelming majority of the ConfigMgr admins that I know are using Task Sequences for their In-Place Upgrades.
Those that want to get away from TS’s are testing the waters by manually releasing the Feature Updates as software updates.

The retirement of SAC-T means literally nothing to those groups

If I just wasted 30 minutes of your time … I’m sorry … but you can go back to your regularly scheduled programming of Reddit, Twitter, and Slack.

What if I Love my WUfB and Servicing Plans?

If you are one of the few and proud that are using WUfB or Servicing Plans then you do have to consider these changes and probably rework your configuration.
However, it's ridiculously easy to handle since you now only have a single channel to worry about.
Bump back any existing SAC release rings you have and replace your SAC-T with a SAC ring with a very minimal, if any, delay.
Keep in mind that for 1903 and only 1903 they are adding a built-in 60 day additional delay for those configured for SAC.
That’s really it.
It should take all of 5 minutes of actual real work and 10 hours of change management documentation and review.
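As a sketch of what the ring rework amounts to, you can compute when a given ring would be offered a release, folding in the one-time 60-day delay for 1903 mentioned above. The release date and deferral below are invented purely for illustration:

```python
from datetime import date, timedelta

BUILTIN_1903_SAC_DELAY = 60  # one-time extra delay Microsoft added for 1903

def offer_date(release, deferral_days, version=""):
    """When a WUfB ring configured for SAC would be offered a release."""
    extra = BUILTIN_1903_SAC_DELAY if version == "1903" else 0
    return release + timedelta(days=deferral_days + extra)

# Hypothetical ring with a 30-day deferral against a hypothetical
# 1903 release date:
print(offer_date(date(2019, 5, 21), 30, "1903"))  # → 2019-08-19
print(offer_date(date(2019, 5, 21), 30))          # → 2019-06-20
```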
The only thing you could truly complain about with this change is that you are losing the increased delay inherent in waiting for CBB/SAC to be released.
If you are using ConfigMgr Servicing Plans that’s actually a big deal.
With Servicing Plans you can only delay the releases by 120 days so losing 60 days or more is troublesome.
For you weirdos here’s a User Voice Item I just cooked up: Bring Servicing Plans into Parity with WUfB/Intune or Kill Them.
If you’re using WUfB then you can delay Feature Updates for 365 days making it less of a problem.
If you can’t get a release out in a year then quite frankly you’re not ready for the kind of ‘modern’ environment that WUfB/Servicing Plans are made for.

You need to move to Task Sequences or just manually deploying them via Software Updates
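Putting the two deferral caps side by side makes the difference plain; the ~60-day figure is the rough delay that waiting for the SAC designation used to buy you, per the discussion above:

```python
SERVICING_PLAN_MAX_DEFER = 120  # days: Servicing Plans' deferral ceiling
WUFB_MAX_DEFER = 365            # days: WUfB's deferral ceiling
SACT_GAP_LOST = 60              # rough delay waiting for SAC used to buy

for name, cap in [("Servicing Plans", SERVICING_PLAN_MAX_DEFER),
                  ("WUfB", WUFB_MAX_DEFER)]:
    print(f"{name}: {SACT_GAP_LOST / cap:.0%} of the deferral runway gone")
# Servicing Plans: 50% of the deferral runway gone
# WUfB: 16% of the deferral runway gone
```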

The Moral: We're Finally There!
The reason I wanted to write this all out in long-form is because it tells a story.
A slightly hopeful story even.
When Microsoft first announced their WaaS concept in 2015 pretty much every administrator I talked to said it was doomed to fail.
You have to crawl before you can walk and Microsoft was expecting a bunch of desk jockeys to compete in a 100 yard dash where losing was a career limiting move.
In 2015 we were given three branches (CB, CBB, LTSB) released whenever Microsoft got around to it and supported for 14 months with roughly 4 of those eaten up waiting for the almost meaningless CBB stamp.
Fast forward to 2019.
We now have two channels (SAC and LTSC) that are released on a specified schedule and are supported for up to 30 months.
This is exactly what we asked for from the start: fewer releases supported for longer with a schedule we could plan around.
This is simple and straightforward.
Everyone should be able to understand and plan around this cadence.
Instead of wailing and gnashing your teeth because change is happening be glad that we are getting what we wanted in the first place.
Now the task is to start stringing together high quality, trouble-free releases so that we are all comfortable putting the Task Sequences down and start using WUfB/Servicing.
When 1809 was released I don't think I was alone in going to my whiteboard and writing:


VMware Cloud on Azure
June 8, 2018

I work for a global channel partner of Microsoft, VMware & AWS, and one of my teammates recently asked me whether VMware Cloud on Azure (a similar solution to VMware Cloud on AWS) would be a reality.
It turned out that this was on the back of a statement from VMware CEO Pat Gelsinger, where he supposedly mentioned “We have interest from our customers to expand our relationships with Google, Microsoft and others” & “We have announced some incremental expansions of those agreements”, which seems to have been represented in a CNBC article as meaning that VMware Cloud is coming to Azure (insinuating the reality of vSphere on Azure bare metal servers).
I sent my response back to the teammate outlining what I think of it and the reasoning behind my thought process, but I thought it would be good to get the thoughts of the wider community too, as it's a very relevant question for many, especially if you work in the channel, work for the said vendors, or are a customer currently using the said technologies or planning on moving to VMware Cloud on AWS.
Some context first.
I've been following the whole VMware Cloud on Azure discussion since it first broke out last year, and ever since VMware Cloud on AWS (VMWonAWS) was announced there was some noise from Microsoft, specifically Corey Sanders (Corporate Vice President of Azure), about their own plans to build a VMWonAWS-like solution inside Azure data centers.
Initially it looked like just a publicity stunt from MSFT to steal the thunder from AWS during the announcement of VMWonAWS, but later on details emerged that, unlike VMWonAWS, this was not a jointly engineered solution between VMware & Microsoft but a standalone vSphere solution running on FlexPod (NetApp storage and Cisco UCS servers), managed by a VMware vCAN partner who happened to host their solution in the same Azure DC, with L3 connectivity to Azure Resource Manager.
Unlike VMWonAWS, there was no back-door connectivity to the core Azure services, only public API integration via the internet.
It was also not supposed to run vSphere on native Azure bare metal servers, unlike VMWonAWS.
All the details around these were available in 2 main blog posts, one from Corey @ MSFT (here) and another from Ajay Patel (SVP, Cloud Products at VMware) here, but the contents of these 2 articles have since been changed to something completely different or the original details were removed entirely. Before Corey's post was modified a number of times, he mentioned that they started working initially with the vCAN partner but later on engaged VMware directly for discussions around potential tighter integration, and at the same time Ajay's post (prior to being removed) corroborated the same. But none of that info is there anymore, and while the 2 companies are no doubt talking behind the scenes about some collaboration, I am not sure it's safe for anyone to assume they are working on a VMWonAWS-like solution when it comes to Azure. VMWonAWS is a genuinely integrated solution born of months and months of joint engineering, and while VMware may have incentives to do something similar with Azure, it's difficult to see the commercial or PR benefit of such a joint solution for Microsoft, as that would ruin their existing messaging around Azure Stack, which is supposed to be their only & preferred hybrid cloud solution.
My thoughts!
In my view, what Pat Gelsinger was saying above (“we have interest from our customers to expand our relationship with Microsoft and others”) likely means something totally different from building a VMware Cloud on Azure that runs the vSphere stack on native Azure hardware.
VMware's vision has always been Any Cloud, Any App, Any Device, which they announced at VMworld 2016 (read the summary at http://chansblog.com/vmworld-2016-us-key-annoucements-day-1/), and the aspiration (based on my understanding at least) was to be the glue between all cloud platforms and on-premises, which is a great one.
So when it comes to Azure, the only known plans (which are probably what Pat was alluding to) were the 2 things below. First, to use NSX to bridge on-premises (& other cloud platforms) to Azure by extending network adjacency right into the Azure edge, in a similar way to how you can stretch networks to VMWonAWS.
NSX-T version 2.2.0 which GA’d on Wednesday the 6th of June can now support creating VMware virtual networks in Azure and being able to manage those networks within your NSX data center inventory.
All the details can be found here.
What Pat was probably doing was setting the scene for this announcement but it was not news, as that was on the roadmap for a long time since VMworld 2016.

This probably should not be taken to mean that VMware on Azure bare metal is a reality, at least at this stage.
Second, in addition to that, VMware Cloud Services (VCS, a SaaS platform announced at VMworld 2017 – more details here) will have more integration with native AWS, native Azure and GCP, which is also what Pat is hinting at when he says more integration with Azure, but that too was always on the roadmap.
At least that's my take on VMware's plans and their future strategy.
Things can change in a flash as the IT market is full of changes these days, with so many competitors as well as co-opetitors.
But I just can't see, at least in the immediate future, there being a genuine VMware Cloud on Azure solution that runs vSphere on bare metal Azure hardware, similar to VMWonAWS, despite what that CNBC article seems to insinuate.
What do you all think? Any insiders with additional knowledge, or anyone with a different theory? Keen to get people's thoughts.

Apple WWDC 2017 – Artificial Intelligence, Virtual Reality & Mixed Reality
June 6, 2017
As a technologist, I like to stay close to key new developments & trends in the world of digital technology to understand how these can help users address common day to day problems more efficiently.
Digital disruption and technologies behind that such as Artificial Intelligence (AI), IoT, Virtual Reality (VR), Augmented Reality (AR) & Mixed Reality (MR) are hot topics as they have the potential to significantly reshape how consumers will consume products and services going forward.
I am a keen follower on these disruptive technologies because the potential impact they can have on traditional businesses in an increasingly digital, connected world is huge in my view.
Something I heard today from Apple, the largest tech vendor on the planet, about how they intend to use various AI technologies along with VR and AR in their next product upgrades across the iPhone, iPad, Apple Watch, App Store, Mac, etc. made me want to summarise those announcements and add my thoughts on how Apple will potentially lead the way to mass adoption of such digital technologies by many organisations of tomorrow.
Apple’s WWDC 2017 announcements.
I've been an Apple fan since the first iPhone launch, as they have been the prime example of a tech vendor that utilizes cutting-edge IT technologies to provide elegant solutions to day-to-day requirements in a simple and effective manner that provides a rich user experience.
I practically live on my iPhone every day for work and non-work related activities and also appreciate their other ecosystem products such as the MacBook, Apple watch, Apple TV and the iPad.
This is typically not because they are technologically so advanced, but simply because they provide a simple, seamless user experience when it comes to using them to increase my productivity during day to day activities.
So naturally I was keen on finding out about the latest announcements that came out of Apple’s latest World Wide Developer Conference event that was held earlier today in San Jose.
Having listened to the event and the announcements, I was excited by the new product and software upgrades announced, but more than that I was super excited about a couple of related technology integrations Apple are coming out with, which include a mix of AI, VR & AR to provide an even better user experience by integrating these technology advancements into their product offerings.
Now before I go any further, I want to highlight this is NOT a summary of their new product announcements.
What interested me out of these announcements were not so much the new apple products, but mainly how Apple, as a pioneer in using cutting edge technologies to create positive user experiences like no other technology vendor on the planet, are going to be using these potentially revolutionary digital technologies to provide a hugely positive user experience.
This is relevant to every single business out there that manufacture a product, provides a service or solution offering to their customers as anyone can potentially look to incorporate the same capabilities in a similar or even a more creative and an innovative manner than Apple to provide a positive user experience in a similar fashion.
Use of Artificial Intelligence.
Today Apple announced the increased use of various AI technologies across their future products, as summarised below.

Increased use of Artificial Intelligence technologies by the personal assistant “Siri”, to provide a more positive & more personalised user experience. In the upcoming version of watchOS 4 for the Apple Watch, AI technologies such as machine learning will be used to power the new Siri watch face, so that Siri can provide you with dynamic updates that are specifically relevant to you and what you do (context awareness).
The new iOS 11 will include a new voice for Siri, which now uses Deep Learning Technologies (AI) behind the scene to offer a more natural and expressive voice that sounds less machine and more human.
Siri will also use Machine Learning on each device (“On device learning”) to understand specifically what’s more relevant to you based on what you do on your device so that more personalised interactions can be made by Siri – In other words, Siri is becoming more context aware thanks to Machine Learning to provide a truly personal assistant service unique to each user including predictive tips based on what you are likely to want to do / use next.
Siri will use Machine Learning to automatically memorise new words from the content you read (i.e. news), so these words are included in the dictionary & predictive text automatically if you want to type them.
Use of Machine Learning in iOS 11 within the Photos app to enable various new capabilities that make life easier with your photos. The next version of the Apple Mac OS, code-named High Sierra, will support additional features in the Photos app, including advanced face recognition capabilities which utilise AI technologies such as advanced convolutional neural networks to let you group / filter your photos based on who's actually in them.
Machine learning capabilities will also be used to automatically understand the context of each photo based on the content of the photo to identify photos from events such as sporting events, weddings…etc and automatically group them / create events / memories.
Using computer vision capabilities to create seamless loops on live photos.
Use of Machine Learning to activate palm rejection on the iPad during writing using the apple Pen.
Most Machine Learning capabilities are now available to 3rd party programmers via the iOS APIs, such as the Vision API (enabling iOS app developers to harness machine learning for face tracking, face detection, landmarks, text detection, rectangle detection, barcode detection, object tracking and image registration) and the Natural Language API (providing language identification, tokenization, lemmatisation, part of speech tagging and named entity recognition).
Introduction of a Machine Learning model converter, so that 3rd party ML models can be converted to native iOS 11 Core ML functions.
Use of Machine Learning to improve graphics in iOS 11. Other Mac OS High Sierra updates will include Metal 2 (the Apple API that provides app developers near-direct access to GPU capabilities), which will now integrate Machine Learning into graphics processing to provide advanced graphical capabilities such as Metal performance shaders, recurrent neural network kernels, binary convolution, dilated convolution, L2-norm pooling, dilated pooling, etc.
Newly announced Mac Pro graphics powered by AMD Radeon Vega can provide up to 22 teraflops of half precision compute power which is specifically relevant for machine learning related content development.
Use of Virtual Reality & Augmented Reality.
Announcement on the introduction of Metal API for Virtual Reality to be used by developers – That includes Virtual Reality integration to Mac OS High Sierra Metal2 API to enable features such as VR-optimised display pipeline for video editing using VR and other related updates such as viewport arrays, system trace stereo timelines, GPU queue priorities, Frame debugger stereoscopic visualisation.
Availability of the ARKit for iOS 11 to create Augmented Reality straight from the iPhone, using its camera and built-in Machine Learning to identify content in the live video in real time.
Use of IoT capabilities.
Apple Watch integration for bi-directional information synchronisation between the Apple Watch and ordinary gym equipment, so that your Apple Watch will now act as an IoT gateway to typical gym equipment like a treadmill or cross trainer: you get more accurate measurements from the Apple Watch, and the gym equipment adjusts the workouts based on those readings.
Apple watch OS 4 will also provide core Bluetooth connectivity to other devices such as various healthcare tools that open the connectivity of those devices through the Apple Watch.
My thoughts.
The use cases for digital technologies such as AI, AR & VR in a typical corporate or enterprise environment to create a better product / service / solution offering, as Apple has used them, are immense and often only limited by one's creativity & imagination.
Many organisations around the world, from other tech or product vendors to Independent Software Vendors to an ordinary organisation like a high street shop or a supermarket can all benefit from the creative application of new digital technologies such as Artificial Intelligence, Augmented Reality and Internet Of Things in to their product / service / solution offerings to provide their customers with richer user experience as well as exciting new solutions.
Topics like AI and AR are hot topics in the industry and some organisations are already evaluating the use of them while some already benefit from some of these technologies made easily accessible to the enterprise through platforms such as public cloud (Microsoft Cortana Analytics and Azure Machine Learning capabilities available on Microsoft Azure for example) platforms.
But there are also a large number of organisations who are not yet fully investigating how these technologies can potentially make their business more innovative, differentiated or at the very least, more efficient.
If you belong to the latter group, I would highly encourage you to start thinking about how these technologies can be adopted by your business creatively.
This applies to any organisation of any size in the increasingly digitally connected world of today.
If you have a trusted partner for IT, I'd encourage you to talk to them about the same, as the chances are that they will have more collective experience in helping similar businesses adopt such technologies, which is more beneficial than trying to get there on your own, especially if you are new to it all.
Digital disruption is here to stay and Apple have just shown how advanced technologies that come out of digital disruption can be used to create better products / solutions for average customers.
Pretty soon, AI / AR / IoT-backed capabilities will become the norm rather than the exception, and how would your business compete if you are not adequately prepared to embrace them?
Keen to get your thoughts.
You can watch the recorded version of the Apple WWDC 2017 event here.
Time of the Hybrid Cloud?
January 22, 2016
A little blog on something slightly less technical but equally important today.
Not a marketing piece but just my thoughts on something I came across that I thought would be worth writing something about.
I came across an interesting article this morning based on Gartner research into last year's global IT spend, which revealed that global IT spend was down by about $216 billion during 2015.
However, during the same year data center IT spend was up by 1.8% and is forecast to go up by 3% in 2016.
Everyone from IT vendors to resellers to every IT sales person you come across these days, on Internet blogs / news / LinkedIn or out in the field seem to believe (and make you believe) that the customer owned data center is dead for good and everything is or should be moving to the cloud (Public cloud that is).
If all that is true, it made me wonder how the data center spend went up when in fact it should have gone down.
One might think this data center spend itself was possibly fuelled by the growth in the public cloud infrastructure expansion due to increased demand on Public cloud platforms like Microsoft Azure and Amazon AWS.
Makes total sense, right? Perhaps on the surface. But upon closer inspection there's a slightly more complicated story, the way I see it.

Part 1 – Contribution from the Public cloud

Public cloud platforms like AWS are growing fast and aggressively and there’s no denying that.
They address a need in the industry for a global, shared platform that can scale infinitely on demand. Due to the sheer economy of scale these shared platform providers have, customers benefit from cheaper IT costs; speccing up a data center for your occasional peak requirements (which may only be hit once a month) and paying for it all upfront, regardless of actual utilisation, can be an expensive exercise for many.
With a Public cloud platform, the upfront cost is cheaper and you pay per usage, which makes it an attractive platform for many.
Sure, there are more benefits to using a public cloud platform than just the cost factor, but essentially cost has always been the key underpinning driver for enterprises to adopt public cloud since its inception.
Most new start-ups (the Netflixes of the world), and even some established enterprise customers who don't have the baggage of legacy apps (by legacy apps, I'm referring to client-server type applications typically run on the Microsoft Windows platform), are by default electing to predominantly use a cheaper Public cloud platform like AWS to host their business application stack without owning their own data center kit.
This will continue to be the case for those customers and therefore will continue to drive the expansion of Public cloud platforms like AWS.
And I'm sure a significant portion of the growth in 2015 data center spend would have come from the increase in this pure Public cloud usage, causing the cloud providers to buy yet more data center hardware.
Part 2 – Contribution from the "Other" cloud
The point is, however, that not all of the data center spend increase in 2015 would have come from Public cloud platforms like AWS or Azure buying extra kit for their data centres. When you look at numbers from traditional hardware vendors, HP's appear to be up by around 25% for the year, and others such as Dell, Cisco and EMC also appear to have grown their sales in 2015, contributing towards the increased data center spend. It is no secret that none of these public cloud platforms use traditional data center hardware vendors' kit in their Public cloud data centres: they often use commodity hardware, or even build servers and networking equipment themselves (a lot cheaper).
So where would the increased sales for these vendors have come from?
My guess is that they have likely come from enterprise customers deploying Hybrid Cloud solutions: customers' own hardware deployed in their own / co-location / off-prem / hosted data centres (the customer still owns the kit), along with an enterprise-friendly Public cloud platform (mostly Microsoft Azure or VMware vCloud Air) acting as just another segment of their overall data center strategy. If you consider most established enterprise customers, the chances are that they have lots of legacy applications that are not always cloud friendly.
By legacy applications, I mean typical WINTEL applications conforming to the client-server architecture.
These apps would have started life in the enterprise since Windows NT / 2000 days and have grown with their business over time.
These applications are typically not cloud friendly (industry buzz word is “Cloud Native”) and often moving these as is on to a Public cloud platform like AWS or Azure is commercially or technically not feasible for most enterprises.
(I've been working in the industry since the Windows 2000 days and I can assure you that these types of apps still make up a significant number out there).
And this "baggage" often prevents many enterprises from using just Public cloud (sure, there are other things, like compliance, that get in the way of Public cloud too, but over time Public cloud systems will naturally begin to cater properly for compliance requirements, etc., so those obstacles should be short lived).
While a small number of those enterprises will have the engineering budget and resources necessary to re-design and re-develop these legacy app stacks into a more modern, cloud-native stack, most will not have that luxury. Such redevelopment work is often expensive and, most importantly, time consuming and disruptive.
So, for most of these customers, the immediate tactical solution is to resort to a Hybrid cloud solution where the legacy “baggage” app stack live on a legacy data center and all newly developed apps will likely be developed as cloud native (designed and developed from ground up) on an enterprise friendly Public cloud system such as Microsoft Azure or VMware vCloud Air.
An overarching IT operations management platform (industry buzz word “Cloud Management Platform”) will then manage both the customer owned (private) portion and the Public portion of the Hybrid cloud solution seamlessly (with caveats of course).
I think this is what was happening in 2015, and it may also explain the growth of legacy hardware vendor sales at the same time. Since I work for a fairly large global reseller, I've witnessed this increased hardware sales first hand from the traditional data center hardware vendor partners (HP, Cisco, etc.) through our business too, which adds up.
I believe this adoption of Hybrid cloud solutions will continue throughout 2016 and possibly beyond for a good while, at least until all legacy apps are eventually phased out, which could be a long way off.
So there you have it.
In my view, Public cloud will continue to grow but if you think that it will replace customer owned data center kit anytime soon, that’s probably unlikely.
At least 2015 has proved that both Public cloud and Private cloud platforms (through the guise of Hybrid cloud) have grown together and my thoughts are that this will continue to be the case for a good while.
Who knows, I may well be proven wrong, and within 6 months the AWS, Azure and Google Public clouds will devour all private cloud platforms and everybody will be happy on just Public cloud :-).
But common sense suggests otherwise.
I can see a lot more Hybrid cloud deployments in the immediate future (at least a few years) using mainly the Microsoft Azure and VMware vCloud Air platforms. Based on technologies available today, these 2 stand out in my view as probably the Public cloud platforms best suited for strong Hybrid cloud compatibility, given their already popular presence in the enterprise data center (for hosting legacy apps efficiently), as well as each having a good overarching cloud management platform that customers can use to manage their Hybrid Cloud environments.
Thoughts and comments are welcome….!.
Microsoft Windows Server 2016 Licensing – Impact on Private Cloud / Virtualisation Platforms
December 6, 2015

It looks like the guys at the Redmond campus have released a brand new licensing model for Windows Server 2016 (currently on technical preview 4, due for release in 2016).
I've had a quick look, as Microsoft licensing has always been an important matter, especially when it comes to datacentre virtualisation and private cloud platforms. Unfortunately I cannot say I'm impressed with what I've seen (quite the opposite, actually), and the new licensing is going to sting most customers, especially those that host private clouds or large VMware / Hyper-V clusters with high-density servers.
What’s new (Licensing wise)?.
Here are the 2 key licensing changes:

From Windows Server 2016 onwards, licensing for all editions (Standard and Datacenter) will be based on physical cores, per CPU.

A minimum of 16 cores must be licensed per physical server (licenses are sold in packs of 2 cores, so a minimum of 8 packs to cover 16 cores). This can cover either 2 processors with 8 cores each or a single processor with 16 cores. Note that this is the minimum you can buy: if your server has additional cores, you need to buy additional licenses in packs of 2. So for a dual-socket server with 12 cores in each socket, you need 12 x 2-core Windows Server Datacenter license packs (+ CALs).
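The pack arithmetic above can be sketched as a small PowerShell helper (a hypothetical function for illustration, not something from the licensing datasheet):

```powershell
# Hypothetical helper illustrating the 2016 rules: license every physical
# core, subject to a 16-core minimum per server, sold in packs of 2 cores.
function Get-RequiredLicensePacks {
    param(
        [int]$Sockets,
        [int]$CoresPerSocket
    )
    $totalCores = $Sockets * $CoresPerSocket
    # Every server must license at least 16 cores, even if it has fewer.
    $licensedCores = [Math]::Max($totalCores, 16)
    # Licenses are sold in 2-core packs; round up for odd core counts.
    [Math]::Ceiling($licensedCores / 2)
}

Get-RequiredLicensePacks -Sockets 2 -CoresPerSocket 8    # 8 packs (the minimum)
Get-RequiredLicensePacks -Sockets 2 -CoresPerSocket 12   # 12 packs, as in the example above
```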
The most obvious change is the announcement of core based Windows server licensing.
Yeah, you read that right!
Microsoft is jumping on the increasing core counts available in modern processors and trying to cash in by removing the socket-based licensing approach that's been in place for over a decade and introducing a core-based license instead.
And they don’t stop there….

One might expect that, if they switch to a CPU core based licensing model, those with fewer cores per CPU socket (4 or 6) would benefit from it, right? Wrong! By introducing a mandatory minimum number of cores you need to license per server (regardless of the actual physical core count available in each CPU of the server), they are also making you pay a guaranteed minimum licensing fee for every server (almost a guaranteed minimum income per server which, at worst, would be the same as the Windows Server 2012 licensing revenue based on CPU sockets).
Now, Microsoft has said that each license (covering 2 cores) will be priced at 1/8th the cost of a 2-processor license of the corresponding 2012 R2 edition. In my view, that's a deliberate smoke screen aimed at making it look like they are keeping effective Windows Server 2016 licensing costs the same as they were for Windows Server 2012; in reality that only holds for a small number of server configurations (servers with up to 16 cores in total, which hardly anyone uses anymore, as most new datacentre servers, especially those running some form of hypervisor, typically use 10/12/16-core CPUs these days). See the screenshot below (taken from the Windows Server 2016 licensing datasheet published by Microsoft) to understand where this new licensing model will introduce additional costs and where it won't.
The difference in cost to customers.
Take the following scenario for example.
You have a cluster of 5 VMware ESXi / Microsoft Hyper-V hosts, each with 2 x 16-core CPUs (Intel E5-4667 or Intel E7-8860 range) per server.
Let's ignore the cost of CALs for the sake of simplicity (you need to buy CALs under the existing 2012 licensing too) and use the list price of a Windows Server Datacenter license to compare the effect of the new 2016 licensing model on your cluster.

List price of Windows Server 2012 R2 Datacenter SKU = $6,155.00 (per 2 CPU sockets)
Cost of a 2-core license pack for Windows Server 2016 (1/8th the cost of W2K12, as above) = $6,155.00 / 8 = $769.37

The total cost to license the 5 nodes in the hypervisor cluster for full VM migration (vMotion / Live Migration) across all hosts would be as follows:
Before (with Windows 2012 licensing) = $6,155.00 x 5 = $30,775.00
After (with Windows 2016 licensing) = $769.37 x 16 x 5 = $61,549.60
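As a sanity check, here's the same comparison as a few lines of PowerShell (note: using the unrounded pack price of $769.375 gives exactly $61,550.00; the figures above round the pack price to $769.37 first, hence $61,549.60):

```powershell
# 5-node cluster, 2 sockets x 16 cores per node; list prices, CALs ignored.
$w2k12PerTwoSockets = 6155.00
$packPrice    = $w2k12PerTwoSockets / 8      # 2-core pack at 1/8th the 2012 price
$nodes        = 5
$packsPerNode = (2 * 16) / 2                 # one pack per 2 cores = 16 packs

$before = $w2k12PerTwoSockets * $nodes       # 2012 model: one license per 2 sockets
$after  = $packPrice * $packsPerNode * $nodes
"Before: {0:N2}  After: {1:N2}  Ratio: {2}x" -f $before, $after, ($after / $before)
```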
Now, obviously the absolute numbers aren't what matters (these are just list prices; customers actually pay heavily discounted prices). What matters is the relative increase: the new cost is a whopping 200% of the current Microsoft licensing cost, i.e. the price has doubled.
This is absurd in my view! The most absurd part is that having to license every underlying CPU in every hypervisor host within the cluster (often with a Datacenter license) was already absurd enough under the current model. Even though a VM will only ever run on a single host's CPUs at any given time, Microsoft's strict stance on the immobility of Windows licenses meant that any virtualisation / private cloud customer had to license all the CPUs in the underlying hypervisor cluster to run a single VM; allocating a Windows Server Datacenter license to cover every CPU socket in the cluster was indirectly enforced by Microsoft, despite how absurd that is in this cloud day and age.
And now they are effectively taxing you on the core count too?.
That's not far short of daylight robbery for those Microsoft customers.
FYI – given below is the approximate cost of Windows Server licensing, relative to the 2012 model, for any virtualisation / private cloud customer with more than 8 cores per CPU in a typical 5-server cluster where VM mobility through VMware vMotion or Hyper-V Live Migration across all hosts is enabled as standard.

Dual-CPU server with 10 cores per CPU = 125% of the 2012 cost
Dual-CPU server with 12 cores per CPU = 150% of the 2012 cost
Dual-CPU server with 14 cores per CPU = 175% of the 2012 cost
Dual-CPU server with 18 cores per CPU = 225% of the 2012 cost
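Those percentages fall straight out of the pack count: a dual-socket server needs one 2-core pack per 2 cores, against a baseline of 8 packs (the 16-core minimum, which matches the old 2-socket price). A quick sketch:

```powershell
# Relative 2016 cost for a dual-CPU server vs the old per-2-socket license.
foreach ($coresPerCpu in 10, 12, 14, 18) {
    $packs    = (2 * $coresPerCpu) / 2       # one 2-core pack per 2 cores
    $relative = ($packs / 8) * 100           # 8 packs = the 2012-equivalent price
    "$coresPerCpu cores per CPU -> $relative% of the 2012 cost"
}
```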
Now, this is based on today's technology.
No doubt that the CPU core count is going to grow further and with it, the price increment is only just going to get more and more ridiculous.
My Take.
It is pretty obvious what MS is attempting to achieve here. With the ever-increasing core count in CPUs, 2-CPU server configurations have become (if not already) the norm for lots of datacentre deployments, and rather than being content with selling a Datacenter license + CALs to cover the 2 CPUs in each server, they are now trying to benefit from every additional core that Moore's law inevitably introduces onto newer generations of CPUs. 12-core processors are already becoming the norm in most corporate and enterprise datacentres, where virtualisation on 2-socket servers with 12 or more cores per socket is becoming the standard (14, 16, 18 cores per socket are not rare anymore with the Intel Xeon E5 & E7 range, for example).
I think this is a shocking move from Microsoft, and I cannot see any justifiable reason for it other than pure greed and complete and utter disregard for their customers. As much as I've loved Microsoft Windows as an easy-to-use platform of choice for application servers over the last 15-odd years, I, for one, will now be advising my customers to strategically put plans in place to move away from Windows, as it is going to be price prohibitive for most, especially if you are going to have an on-premise datacentre with some sort of virtualisation (which most do) going forward.
Many customers have successfully standardised their enterprise datacentre on the much cheaper LAMP stack (Linux platform) as the preferred guest OS of choice for their server & Application stack already anyway.
Typically, new start-ups (who don't have the burden of legacy Windows apps) or large enterprises (with sufficient manpower with Linux skills) have managed to do this successfully so far, but I think if this expensive Windows Server licensing stays, lots of other folks who've traditionally been happy and comfortable with their legacy Windows knowledge (and have therefore learnt to tolerate the already absurd Windows Server licensing costs) will now be forced to consider an alternative platform (or move 100% to public cloud).
If you retain your workload on-prem, Linux will naturally be the best choice available.  For most enterprise customers, continuing to run their private cloud / own data centres using Windows servers / VMs on high capacity hypervisor nodes is going to be price prohibitive.
In my view, most current Microsoft Windows Server customers have remained Windows Server customers not by choice but by necessity, due to the baggage of legacy Windows apps and the familiarity they've accumulated over the years; any attempt to move away would have been too complex / risky / time consuming. Now, however, it has come to a point where most customers are forced to re-write their app stacks from the ground up anyway, due to the way public cloud systems work, and while they are at it, it makes sense to choose a less expensive OS stack for those apps, saving a bucket-load of unnecessary Windows Server licensing costs.
So possibly the time is right to bite the bullet and get on with embracing Linux?.
So, my advice for customers is as follows.

Tactical: Voice your displeasure at this new licensing model. Use all means available, including your Microsoft account manager, reseller, distributor, OEM vendor, social media, etc. The more collective noise we all make, the louder it will (hopefully) be heard by the powers at Microsoft.
Get yourself into a Microsoft ELA for a reasonable term, OR add Software Assurance (pronto): if you have an ELA, MS have said they will let people carry on buying per-processor licenses until the end of the ELA term. That essentially lets you lock yourself in under the current Server 2012 licensing terms for a reasonable length of time until you figure out what to do. Alternatively, if you have SA, at the end of the SA term MS will let you declare the total number of cores covered under the current per-CPU licensing and will grant you an equal number of per-core licenses, so you are effectively not paying more for what you already have. You may also want to enquire about over-provisioning / over-buying your per-proc licenses along with SA now, for any known future requirements, in order to save costs.
Strategic: Put in a plan to move your entire workload onto public cloud. This is probably the easiest approach but not necessarily the smartest, especially if, given your requirements, it's better for you to host your own datacenter. Also, even if you plan to move to public cloud, there's no guarantee that any public cloud provider other than Microsoft Azure will remain commercially viable for running Windows workloads, in case MS changes the SPLA terms for 2016 too.
Put in a plan to move away from Windows to a different, cheaper platform for your workload: This is probably the best and the safest approach.
Many customers will have evaluated this at some point in the past but shied away from it, as it's a big change and requires people with the right skills.
Platforms like Linux have been enterprise ready for a long time now and there are a reasonable pool of skills in the market.
And if your on-premise environment is standardised on Linux, you can easily port your applications over to many public cloud platforms too, which are typically much cheaper than running on Windows VMs.
You are then also able to deploy true cloud native applications and also benefit from many open source tools and technologies that seem to be making a real difference in the efficiency of IT for your business.
This article and the views expressed in it are mine alone.
Comments / Thoughts are welcome. PS: this kind of reminds me of the vRAM tax that VMware tried to introduce a while back, which monumentally backfired, and VMware had to completely scrap that plan. I hope enough customer pressure will cause Microsoft to back off too.
Unzipping files in Powershell scripts.
I've been working for some time on a project which is deploying a complex application to a client's servers. This project relies on Powershell scripts to push zip files to servers, unzip those files on the servers and then install the MSI files contained within them.
The zip files are frequently large (up to 900MB) and the time taken to unzip the files is causing problems with our automated installation software (Tivoli) due to timeouts.

The files are currently unzipped using the CopyHere method. Simple tests on a Windows 8 PC with 8GB RAM and an 8-core processor (though a single SATA hard drive) show that this method is disk intensive: disk utilisation as viewed in Task Manager flatlines at 100% during the extraction.
I spent some time looking at alternatives to the “Copyhere” method to unzip files to reduce the time taken for deployments and reduce the risk of Tivoli timeouts which were affecting the project.
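For reference, a minimal sketch of the Shell.Application CopyHere approach described above (paths are illustrative, not the project's actual script):

```powershell
# Unzip via the Windows Shell COM object - the slow, disk-intensive method.
$zipPath     = 'C:\Deploy\package.zip'       # illustrative path
$destination = 'C:\Deploy\extracted'

if (-not (Test-Path $destination)) {
    New-Item -ItemType Directory -Path $destination | Out-Null
}

$shell    = New-Object -ComObject Shell.Application
$zipItems = $shell.NameSpace($zipPath).Items()
# 0x14 = 0x10 (yes to all prompts) + 0x4 (no progress dialog)
$shell.NameSpace($destination).CopyHere($zipItems, 0x14)
```

Note that CopyHere is asynchronous and COM-based, which is part of why it performs poorly on large archives.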

Method

A series of test files were produced using a test utility (FSTFIL.EXE). FSTFIL creates test files made up of random data. These files are difficult to compress because they contain little or no "whitespace" or repeating characters, similar to the already-compressed MSI files which make up our deployment packages. Files were created at 100MB, 200MB, 300MB, 400MB and 500MB. Each of these files was zipped into a similar-sized ZIP file, and a single large ZIP file containing all of the test files was also created.
Tests were performed to establish the time taken to decompress increasingly large ZIP files.
Tests were performed to establish whether alternative decompression (unzip) techniques were faster.

Observations

The effect of file size on CopyHere unzips: despite initial observations, after averaging out the time taken to decompress different-sized files using the CopyHere method, the time taken to decompress increasingly large files was found to be linear.

The difference between CopyHere and ExtractToDirectory unzips: to do this comparison, two PowerShell scripts were written. Each script unzipped the same file (a 1.5GB ZIP file containing each of the 100MB, 200MB, 300MB, 400MB and 500MB test files described earlier).
Each script calculated the elapsed time for each extract; this was recorded for analysis.
Unzips took place alternately using one of the two techniques to ensure that resource utilisation on the test PC was comparable for each test.
No detailed performance monitoring was carried out during the first tests, but both CPU and disk utilisation was observed to be higher (seen in Task Manager) when using the CopyHere method.
Conclusion

The ExtractToDirectory method introduced in .NET Framework 4.5 is considerably more efficient when unzipping packages. Where this method is not available, alternative techniques to unzip the packages, possibly including the use of "self-extracting .exe" files, RAM disks or memory-mapped files to remove disk bottlenecks, or more modern decompression techniques, may reduce the risk of Tivoli timeouts and increase the likelihood of successful deployments.
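The original scripts aren't reproduced here, but a minimal sketch of the faster .NET 4.5 approach, including the elapsed-time measurement the tests relied on, looks like this (paths illustrative; requires .NET Framework 4.5 or later):

```powershell
# Unzip via the .NET 4.5 ZipFile class - considerably faster than CopyHere.
Add-Type -AssemblyName System.IO.Compression.FileSystem

$zipPath     = 'C:\Deploy\package.zip'       # illustrative path
$destination = 'C:\Deploy\extracted'

$timer = [System.Diagnostics.Stopwatch]::StartNew()
[System.IO.Compression.ZipFile]::ExtractToDirectory($zipPath, $destination)
$timer.Stop()
"Extract took $($timer.Elapsed.TotalSeconds) seconds"
```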

November 18, 2013

DMA2200 Media Center problem – resolved
For the last few weeks, I've been having problems with my Linksys Media Extender.
This has been causing me a great deal of grief because this is the only method currently of watching TV in bed (I have a Windows 7 Media Center PC in the front room and serve recorded TV as well as the satellite feeds up to the extender in the bedroom).
I’ve noticed that occasionally the Media Center Extender freezes up and becomes unresponsive to the remote control.
I replaced the batteries in the remote (twice) but the problem remained.
The Extender would happily continue displaying the channel that I had started watching but wouldn't allow me to change channels.
Occasionally I’d come back to the Extender to find that the screen was blank and the extender was unresponsive.
Restarting the Extender has always temporarily resolved this issue, but in the last few days, the problem has become more acute and the extender has generally only remained stable for a few minutes before crashing again.
Since nobody is manufacturing extenders any more (until Ceton releases their new media center later this year) , this could have meant that I needed to drop an aerial into the bedroom and lose the ability to watch recorded TV.
Trawling the Internet for clues to the problem I came across this article, which although it was 2 years old and written in the US, gave me a nugget of information which fixed my problem.
http://experts.windows.com/frms/windows_entertainment_and_connected_home/f/116/p/95097/496301.aspx (Sadly not there anymore….) It appears that the extender tries to connect to a Cisco or Linksys server to obtain an update.
Since support for this product has now been withdrawn it is likely that the server is now offline.
My theory is that the problem hit US users a few years ago and perhaps only recently has the same problem surfaced in my region when the UK/European regional servers were decommissioned.
My connection was wired, but I presume that the same steps could be used to fix a problem with a wireless DMA2200 (or DMA2100) extender.
Steps taken: Set a static IP address for the media center extender.
Enter for both primary and secondary DNS servers.
This prevents the extender from accessing the internet and then crashing when it fails to find the relevant update servers.
August 18, 2012

Windows 10 Remote Desktop App UWP Client Latest Update Available For Download And Install For Insider Participants, Contains Many New Features
Microsoft has released the latest version of the Windows 10 Remote Desktop App.
The client side of the Universal Windows Platform (UWP) app has been completely rewritten, with universal compatibility, speed, reliability, and performance given precedence.
It brings several new features and functionalities for Windows 10 users running the client edition.

Microsoft has released a major update for the UWP version of its own Remote Desktop App. The update includes some much-needed and important features, including a complete Dark Mode, ARM64 and x64 support, and better handling of files, Azure Active Directory, etc.
Windows 10 Remote Desktop App UWP Client Latest Update Released For Windows Insider Participants:
Microsoft has released what it claims is a completely re-written Windows 10 Remote Desktop App UWP client. The UWP app now uses the same underlying RDP core engine as the iOS, macOS, and Android clients. The program also supports ARM64 CPUs, the Azure Resource Manager-integrated version of Windows Virtual Desktop, and a dark/light mode.

The update brings the Remote Desktop App to version 10.2.1519. In the latest version, Microsoft has added the ability to create backups of desktop environments and then restore them.
The new UWP RDC application can now automatically detect whether the user is using a new or classic version of the Windows Virtual Desktop.

Microsoft has also addressed a few bugs. From now on, UWP client users shouldn't have problems copying files from local storage, and Microsoft assures that all buttons should work correctly again.
Here's the changelog of the new Remote Desktop UWP client:

Rewrote the client to use the same underlying RDP core engine as the iOS, macOS, and Android clients.
Added support for the Azure Resource Manager-integrated version of Windows Virtual Desktop.
Added support for x64 and ARM64.
Updated the side panel design to work with the full screen.
Added support for light and dark modes.
Added functionality to subscribe and connect to sovereign cloud deployments.
Added functionality to enable backup and restore of workspaces (bookmarks) in release to manufacturing (RTM).
Updated functionality to use existing Azure Active Directory (Azure AD) tokens during the subscription process to reduce the number of times users must sign in.
Updated subscription can now detect whether you’re using Windows Virtual Desktop or Windows Virtual Desktop (classic).
Fixed issue with copying files to remote PCs.
Fixed commonly reported accessibility issues with buttons.

Stable Version Of UWP Remote Desktop Client Tool Expected Soon?

The latest version of Windows 10 Remote Desktop App UWP Client is currently available only to the participants of the Windows Insider Program.
The UWP variant of the Remote Desktop application can be downloaded from the official Microsoft Store.
It is not clear when the stable version of the UWP Remote Desktop Client Tool will become available to regular users of Windows 10.
Microsoft has urged those enrolled in Insider testing for apps to test the program and report any problems.

If the Windows Insider participants do not report any major issues, the latest update of the UWP Remote Desktop Client Tool, with its additional functions, should be delivered to all other users in the next few weeks.
A B. – Tech Plastics (UDCT) and a Windows enthusiast. Optimizing the OS, exploring software, and finding and deploying solutions to strange and weird issues are Alap's main interests.
Copyright © 2014-2020 Appuals.com. All Rights Reserved.