Enterprise Content Management

Armedia Blog

How to Upgrade Windows Server 2008 R2 Core Domain Controller to Windows Server 2012 Core

March 2nd, 2015 by Paul Combs

The upgrade path should be as simple as upgrading a Windows Server 2008 R2 Server Core Domain Controller to Windows Server 2012 Core. However, this is not the case. Most solutions found on the Internet describe this upgrade path without Active Directory services installed. This is an important distinction, because with Active Directory the upgrade fails with a black screen and a cursor, and then rolls back. On a development virtual machine, one upgrade path that did work was Windows Server 2008 R2 Core Domain Controller to Windows Server 2012 R2 Core. As this was NOT the desired path, a workaround had to be found, along with the reason why the Windows Server 2012 R2 Core path worked where the Windows Server 2012 Core path failed.

While researching this problem, Microsoft KB article 2843034 was found, which describes the problem accurately and offers a “solution”. Microsoft summarizes the problem as “… specific to server-core enabled domain controllers that are in-place upgraded to Windows Server 2012 server core. This condition does not occur on GUI or Full-DCs that are in-place upgraded to Windows Server 2012.” The problem is narrowed to “[t]he DirectoryServices-DomainController role [which] is disabled by default and is not enabled because there is no role with that name on the Windows Server 2008 R2 operating system. Since there is nothing to match up among the available Windows Server 2012 manifests, the upgrade hangs.”

Now for the Microsoft “solution”: to make an in-place upgrade succeed, add a “Replacement Manifest”, DirectoryServices-DomainController-ServerCoreUpg-Replacement.man, to the setup source files. “Please contact Microsoft Customer Technical Support to retrieve the manifest. Ensure to reference this article so the agent can provide you with the manifest file free of charge.”

Not quite the solution that was sought. However, there was something in that solution that led to the next course of action. How did the Windows Server 2012 R2 upgrade succeed where the Windows Server 2012 upgrade had failed? It must have had the manifest necessary to succeed. To determine whether Windows Server 2012 R2 had the DirectoryServices-DomainController-ServerCoreUpg-Replacement.man file, the ISO image was opened and the sources\replacementmanifests\ folder examined. The manifest is there. It is not on the Windows Server 2012 ISO.

Armed with this knowledge, the solution is to extract the sources\replacementmanifests\DirectoryServices-DomainController-ServerCoreUpg-Replacement.man file from the Windows Server 2012 R2 DVD or ISO and copy it to the same location in the Windows Server 2012 setup files. Perform the upgrade and watch in amazement and bewilderment as the upgrade process not only continues past the black screen but completes successfully.
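
If you are preparing the upgrade media for several machines, the copy itself is easy to script. Here is a minimal Python sketch, assuming both ISOs have already been extracted to writable folders; the paths below are hypothetical, and the manifest name is the one found on the 2012 R2 media.

import shutil
from pathlib import Path

# Hypothetical locations: both ISOs extracted to writable folders beforehand.
manifest = Path(r"C:\media\ws2012r2\sources\replacementmanifests"
                r"\DirectoryServices-DomainController-ServerCoreUpg-Replacement.man")
target = Path(r"C:\media\ws2012\sources\replacementmanifests")

# Drop the 2012 R2 manifest into the Windows Server 2012 setup sources.
shutil.copy2(manifest, target / manifest.name)
print("Copied", manifest.name, "to", target)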

Documentum Webtop Musings

February 3rd, 2015 by Scott Roth

Webtop has been Documentum’s flagship user interface (UI) since its introduction in Documentum v5 (circa 2003), and has an enormous worldwide install base. It’s built upon solid (though dated) technology, methodology, and standards. It’s also built with a solid API (the WDK), which allows developers to do ANYTHING with Webtop — including replacing it completely with a custom UI. One of my favorite features of Webtop is that it does EVERYTHING. In fact, this is often the reason many end users (i.e., customers) don’t like Webtop — it’s overwhelming. More often than not, we disable and hide capabilities and features in Webtop to make it more palatable for end users.

So, with all of this capability and installed user base, you’d think EMC would enhance/upgrade/extend their flagship UI, right? Instead, they end-of-life it (support for v6.7 SP2 was to end in April 2015; this was recently extended to December 2018 with the release of Webtop v6.8) and are replacing it with one of two new clients: D2 or xCP. Worse, they provide no clear technical migration path from Webtop to either D2 or xCP, and no clear criteria to choose one client over the other. I know, I’ve heard the same guidelines that you have: if the application is “document-centric” it should move to D2; if the application is “process-centric” it should move to xCP. Well, it’s never that easy. Many of our customers have applications they target for development in D2 (or xCP) because they meet one of these guidelines. However, once that application is deployed, they start looking at the next (and the next) application to convert/rewrite/develop. Often these next applications contain elements better suited for the other platform. Now they have a problem: they want to build all of their applications on the same platform to minimize maintenance and maximize their investment, but they are stuck trying to force-fit an application’s requirements into a platform’s misaligned capabilities (or lack of capabilities). If they had remained with Webtop, they could support both types of applications (i.e., document-centric and process-centric) on a single platform. Of course, you lose the “no code” configurability of D2 and xCP and trade it in for full-blown Java development with Webtop.

As I mentioned before, out-of-the-box, Webtop does EVERYTHING. The thing that gripes me the most about xCP and D2 is that out-of-the-box they do NOTHING. Nothing! After installing the client you still have a long road ahead of you just to see your cabinets and folders, and create a few objects in the Docbase. Out-of-the-box, Webtop works. Why doesn’t EMC invest in “Web 2.0-ifying” Webtop? They could rebuild it on Spring, using JSON and Ajax, DFS, REST, or whatever the framework du jour is. And they could provide a migration path from the “classic” Webtop to this new creation. Many of these technologies provide the “configuration” conveniences they are striving for in D2 and xCP. For example, look at what Armedia is doing with its ArkCase Management System. ArkCase is repository-neutral and offers a UI that is elegant, responsive, and highly configurable, using current Web 2.0 technologies to achieve a high level of re-usability and abstraction. Or take a look at CARA by Generis. CARA is a Webtop/D2 alternative and is gaining rapid acceptance for its elegance, flexibility, ease of use, and adaptability. Are these examples of what Webtop should be, could be?

Come on, EMC, revive Webtop and restore it to its flagship status. Don’t just limp it along with periodic maintenance releases and force your user base onto D2 or xCP when they don’t need or want to move.


Finding Similar Documents Without a Full Text Index

January 27th, 2015 by Scott Roth

Is there a way to quickly find similar documents in a Documentum repository? Yes, there is. One approach could be to use the Lucene MoreLikeThis() API. This API call to the Lucene full text search engine extracts what it believes to be the most salient words from a document and runs a full text search looking for documents whose content matches the chosen query words. But what if there was a simpler, lighter-weight approach?

In my 2014 EMC Knowledge Sharing whitepaper, Finding Similar Documents Without Using A Full Text Index, I detail an approach for identifying similar documents in a Documentum repository by using a 64-bit hash value. This hash value, called the Similarity Index (SI), is produced by a hashing function named SimHash[1]. The 64-bit value is attached to an object as metadata via an Aspect, and can then be queried to find content that is similar to a given document’s Similarity Index. For example, you could execute a DQL query like this to discover content that shares 80% similarity with a selected document:

select similar_obj_id from dbo.si_view where r_object_id ='[r_object_id]' and similarity >= 0.80

Where [r_object_id] is the object ID of a known object.

Using queries like this, content can be discovered that meets a varying degree of similarity. In this example, the query returns any document that is at least 80% similar to the selected document. For finer results, you could raise the threshold and query for content with at least 90% similarity.

The details for implementing this solution are discussed in the whitepaper. The most interesting elements of the solution are the SimHash function itself, and the relationship between the Aspect, which stores and evaluates the SI, and a registered database view that makes searching possible.
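
To make the idea concrete, here is a minimal Python sketch of the general SimHash technique. It is not the whitepaper’s implementation; the tokenization and the per-token hash below are illustrative assumptions.

import hashlib

def simhash(tokens, bits=64):
    # One signed counter per bit position; every token votes on every bit.
    v = [0] * bits
    for token in tokens:
        # Hash each token to a stable 64-bit integer (assumption: MD5 prefix).
        h = int.from_bytes(hashlib.md5(token.encode("utf-8")).digest()[:8], "big")
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    # The sign of each counter becomes one bit of the fingerprint.
    return sum(1 << i for i in range(bits) if v[i] > 0)

def similarity(a, b, bits=64):
    # Fraction of identical bits between two fingerprints.
    return 1 - bin(a ^ b).count("1") / bits

doc1 = "the quick brown fox jumps over the lazy dog".split()
doc2 = "the quick brown fox leaps over the lazy dog".split()
print(similarity(simhash(doc1), simhash(doc2)))  # close to 1.0

Documents that share most of their salient tokens produce fingerprints that differ in only a few bits, so the fraction of matching bits serves as the similarity score the DQL query above filters on.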

If you are intrigued, I encourage you to download the whitepaper.

[1] Moses Charikar, “Similarity Estimation Techniques from Rounding Algorithms,” http://www.cs.princeton.edu/courses/archive/spr04/cos598B/bib/CharikarEstim.pdf

VIDEO: Armedia Case Manager – Editing Your User Profile

January 20th, 2015 by Allison Cotney

We have uploaded a new video to the Armedia YouTube Channel! In this video we demonstrate how users can update or change their profile information within Armedia Case Manager.

These changes could include items such as groups that the user belongs to or subscriptions the user may have. And of course, the user profile picture is always customizable.


Stay tuned for more Armedia Case Manager Videos!!

VIDEO – Armedia Case Manager: Generating a Report

January 16th, 2015 by Allison Cotney

Check out our new video blog giving you an inside look at Armedia Case Manager!

In this post, Ronda Ringo demonstrates how users can generate a report within Armedia Case Manager.

Stay tuned for more videos coming soon! To see all of our Armedia Case Manager Videos, CLICK HERE.

VIDEO: Armedia Case Manager: The Dashboard

January 15th, 2015 by Allison Cotney

In today’s video, we give you a tour of the Armedia Case Manager configurable dashboard. This dashboard provides a quick and easy way for users to access commonly needed components of their case management solution.

The dashboard is also customizable so that users can put things in an order that makes sense for their needs.

Stay tuned for more Armedia Case Manager videos coming next week!!

Create a Self-Extracting Installer in Linux

November 19th, 2014 by Paul Combs

Ultimately, many would want to write complex RPMs to install software packages; however, those who are accountable for writing such packages may agree that the task can be cumbersome and impractical. While an RPM accomplishes the goal of wrapping an application in some sort of container for distribution, the same goal can be met with a self-extracting archive, an installer that can launch an embedded script. This is where a utility called makeself comes in. Makeself is described as “a small shell script that generates a self-extractable tar.gz archive from a directory. The resulting file appears as a shell script (many of those have a .run suffix), and can be launched as is.”

Install makeself

cd /opt/app
# Download the makeself distribution (itself a self-extracting archive).
wget http://megastep.org/makeself/makeself-2.1.5.run
chmod 755 makeself-2.1.5.run
# Running it extracts into ./makeself-2.1.5.
./makeself-2.1.5.run
cd makeself-2.1.5
# Put makeself.sh and its helper scripts on the PATH.
cp *.sh /usr/bin

Example
Suppose you want to package and distribute a version of libevent2 for several CentOS 6 servers. Here is one way.

mkdir rpmfile
cd rpmfile

wget ftp://ftp.pbone.net/mirror/ftp5.gwdg.de/pub/opensuse/repositories/home:/aevseev/CentOS_CentOS-6/x86_64/libevent2-2.0.21-1.1.x86_64.rpm
wget ftp://ftp.pbone.net/mirror/ftp5.gwdg.de/pub/opensuse/repositories/home:/aevseev/CentOS_CentOS-6/x86_64/libevent-devel-2.0.21-1.1.x86_64.rpm

# The install script runs from inside the extracted archive, so relative RPM paths work.
cat > libevent-install.sh <<'EOF'
#!/bin/bash
yum install -y libevent2-2.0.21-1.1.x86_64.rpm libevent-devel-2.0.21-1.1.x86_64.rpm
EOF
chmod +x libevent-install.sh
cd ..

Command

#makeself.sh ./DIRECTORY ./FINAL-PRODUCT-NAME.run "SFX installer for COMMENT HERE" ./FILE-IN-DIRECTORY-TO-RUN.sh
makeself.sh ./rpmfile ./libevent-devel.run "SFX installer for libevent-devel (2.0.21)" ./libevent-install.sh

Use the .run file

./libevent-devel.run

Source(s)
http://xmodulo.com/how-to-create-a-self-extracting-archive-or-installer-in-linux.html
http://megastep.org/makeself/

CMIS Integration – Integrating FileNet with SharePoint 2013

October 17th, 2014 by Ben Chevallereau

Recently, our team has been working on a series of CMIS integrations. This video demonstrates the use of the CMIS components that we developed and used to integrate FileNet with SharePoint 2013. The integration has been packaged into SharePoint. During the video, you’ll see how to connect to FileNet, browse the repository, create folders, create documents, and preview and download documents.
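
The components shown in the video are not public, but the same operations can be sketched against any CMIS-capable repository. Here is a rough illustration using Apache Chemistry’s cmislib Python client; the service URL, credentials, and names below are placeholders, and FileNet’s actual CMIS endpoint path varies by installation.

from cmislib import CmisClient

# Placeholder endpoint and credentials; adjust for your FileNet CMIS service.
client = CmisClient('http://filenet-host:9080/fncmis/resources/Service',
                    'user', 'password')
repo = client.defaultRepository

# Browse the repository from the root folder.
for child in repo.rootFolder.getChildren().getResults():
    print(child.getName())

# Create a folder and add a document to it.
folder = repo.rootFolder.createFolder('CMIS Demo')
with open('report.pdf', 'rb') as f:
    doc = folder.createDocument('report.pdf', contentFile=f)

# Download (read back) the document content.
content = doc.getContentStream().read()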

Predictive Analytics and The Most Important Thing

October 15th, 2014 by Jim Nasr

It turns out I was wrong…which happens at an alarmingly increasing rate these days—though I chalk that up to a thirst to challenge myself…errr, my story!

So, for a while now, I had convinced myself that I knew what the most important thing was about successfully doing predictive analytics: accuracy of data and the model (separating the noise). Veracity, as they say. In working with a few clients lately though, I no longer think that’s the case. Seems the most important thing is actually the first thing: What is the thing you want to know? The Question.

As technologists, we often tend to over-complicate and possibly over-engineer. And it’s easy to make predictive analytics focus on the how: the myriad ways to integrate large volumes and exotic varieties of data, the many statistical models to evaluate for fit, the integration of the technology components, the visualization techniques used to best surface results, and so on. All of that has its place. But ultimately, first and most importantly, we need to articulate the business problem and the question we want answered.

What do we want to achieve from the analytics? How will the results help us make a decision?

As easy as that sounds, in practice it is not easy to articulate the business question. It requires a real understanding of the business, its underlying operations, its data and analytics, and what would really move the meter. There is a need to marry the subject matter expert (say, the line-of-business owner) with a quant or a data scientist and facilitate the conversation. This is where we figure out the general shape and size of the result and why it would matter, as well as what data (internal and external) feeds into it.

Articulating The Question engages the rest of the machinery. Answers are the outcome we care about. The process and the machinery (see below for how we do it) give us repeatability and ways to experiment with both asking questions and getting answers.

[Figure: the Armedia Predictive Analytics process for getting from The Question to answers]
