
[MERGE #408] Get GCStress building on Linux

Merge pull request #408 from digitalinfinity:linux_gcstress
This change gets GCStress building and running on Linux.
The harness doesn't pass yet: we segfault in the GC. But the change is large enough already that I wanted to merge it in before proceeding further.
Since it's a large change, I've broken it up into the following 7 commits. They're mostly accurate, but occasionally a change that belongs in a particular commit ended up in a later one.

1. Update CMakeLists.txt
2. Add more #defines to control features
3. Add more of the PAL to build GCStress
4. Add amd64_save_registers.S
5. Added macro FLAG_STRING to ConfigFlagsList.
6. Build script for Unix platforms
7. Convert strings to use CH_WSTR

Details of changes:

**1. Update CMakeLists.txt**
Update CMakeLists with the following changes:
1) Add support for Debug Builds and building ASM files
2) Disable the JIT in linux builds explicitly
3) Remove dependency on PAL_STDCPP_COMPAT

**2. Add more #defines to control features**
Added a few more ifdefs to control whether features are compiled in (on Linux they are not). The new compile definitions are the following:
- `CONFIG_CONSOLE_AVAILABLE` - whether the `-Console` config option is available
- `CONFIG_PARSE_CONFIG_FILE` - whether `jscript.config` should be parsed
- `CONFIG_RICH_TRACE_FORMAT` - whether the `-RichTraceFormat` config option is available
- `SYSINFO_IMAGE_BASE_AVAILABLE` - whether the linker emits the `__ImageBase` global variable
- `ENABLE_BACKGROUND_PAGE_FREEING` - whether the page allocator supports freeing pages on a background thread
- `ENABLE_BACKGROUND_PAGE_ZEROING` - whether the page allocator supports zeroing pages on a background thread
- `ENABLE_BACKGROUND_JOB_PROCESSOR` - whether the BackgroundJobProcessor is enabled
- `ENABLE_RECYCLER_TYPE_TRACKING` - whether the recycler keeps track of the types of allocated objects (for debugging/profiling)
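The pattern these 0/1 defines enable can be sketched as follows; the hard-coded value and the function are illustrative only, not from the Chakra sources, though the guard mirrors the `#if ENABLE_BACKGROUND_PAGE_FREEING` blocks added in this PR:

```cpp
// Illustrative only: a 0/1 feature define gates code at compile time.
// On Linux this PR defines the flag to 0, so the fallback path compiles in.
#define ENABLE_BACKGROUND_PAGE_FREEING 0  // hard-coded for the sketch

int FreePagesMode()
{
#if ENABLE_BACKGROUND_PAGE_FREEING
    return 2;  // hand pages to a background thread's free queue
#else
    return 1;  // free synchronously on the calling thread
#endif
}
```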

The following existing #ifdefs were also disabled on Linux builds:
```
FAULT_INJECTION
GENERATE_DUMP
PROFILE_EXEC
PROFILE_RECYCLER_ALLOC
PROFILE_DICTIONARY
PDATA_ENABLED
HEAP_TRACK_ALLOC
```

I also cleaned up the STACK_BACK_TRACE macro so it can be used to disable more code that depends on capturing stack traces.

Finally, I fixed some build errors in DEBUG builds.

**3. Add more of the PAL to build GCStress**
This change brings in more of the PAL so that GCStress can build. Added a call to initialize the PAL runtime in AutoSystemInfo (the shutdown call is in main).
Added a #define that sets initialization priority on our static globals, to simulate VC++'s #pragma init_seg.
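On GCC/Clang the usual way to simulate #pragma init_seg is the `init_priority` attribute. A rough sketch of the idea follows; the `PAL_GLOBAL` name matches the PR, but this exact definition and the `Tracker` type are assumptions for illustration, not the actual Chakra macro:

```cpp
#include <string>
#include <vector>

// Assumption: a PAL_GLOBAL-style macro that forces prioritized static init
// on GCC/Clang (priorities below 101 are reserved for the implementation).
#ifdef _MSC_VER
#define PAL_GLOBAL
#else
#define PAL_GLOBAL __attribute__((init_priority(200)))
#endif

// Function-local static: safe to touch during dynamic initialization.
std::vector<std::string>& initOrder()
{
    static std::vector<std::string> order;
    return order;
}

struct Tracker
{
    explicit Tracker(const char* name) { initOrder().push_back(name); }
};

PAL_GLOBAL Tracker palGlobal("pal");  // prioritized: constructed first
Tracker engineGlobal("engine");       // default priority: constructed later
```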

The PAL change also adds a dependency on libsodium for cryptographically secure random number generation. To install the packages, run the following commands:
```
sudo apt-get install libsodium13
sudo apt-get install libsodium-dev
```

Specific comments on the PAL changes:
Brought in a large portion of the PAL plus some AMD64-specific pieces.
Added a new PAL_GLOBAL macro and annotated a number of statics/globals in the following files to ensure that they get initialized before our static variables:
```
cruntime/file.cpp
file/file.cpp
handlemgr/handleapi.cpp
init/pal.cpp
map/map.cpp, virtual.cpp
misc/dbgmsg.cpp
objmgr/shmobjectmanager.cpp
shmemory/shmemory.cpp
synchmgr/synchmanager.cpp
synchobj/mutex.cpp
thread/process.cpp, thread.cpp
```
Added an implementation of rand_s in misc/Random.cpp. This has a dependency on libsodium.
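For illustration, a rand_s shim has roughly this shape. The real PAL change draws randomness from libsodium (its `randombytes_buf` API); `/dev/urandom` stands in here so the sketch is self-contained, and the error codes are assumptions, not the PR's actual choices:

```cpp
#include <cerrno>
#include <cstdio>

typedef int errno_t;

// Sketch of a rand_s replacement: fill *randomValue from a CSPRNG and
// return 0 on success, an errno value on failure.
errno_t rand_s(unsigned int* randomValue)
{
    if (randomValue == nullptr)
    {
        return EINVAL;
    }
    FILE* f = std::fopen("/dev/urandom", "rb");
    if (f == nullptr)
    {
        return EIO;
    }
    size_t got = std::fread(randomValue, sizeof *randomValue, 1, f);
    std::fclose(f);
    return got == 1 ? 0 : EIO;
}
```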

Other minor code changes in following files:
cruntime/file.cpp - remove unused headers
exception/seh-unwind.cpp - remove unused headers
file/file.cpp - change the call from GetFileAttributes to GetFileAttributesA in the case where we were passing an ANSI string
include/pal/misc.h - add a declaration to initialize the random number subsystem
init/pal.cpp - rename PAL_InitializeCoreCLR to PAL_InitializeChakraCore
locale/unicode.cpp - remove the PAL_BindResources API
misc/sysinfo.cpp - hard-code the process's maximum address space to 128 TB
safecrt/vsprintf.c - add an implementation of vsprintf_s
synchmgr/synchmanager.cpp - remove non-CoreCLR code
thread/process.cpp - remove support for running PE files in CreateProcess

**4. Add amd64_save_registers.S**
Added amd64_save_registers.S which reimplements amd64_save_registers.asm for the clang assembler.
The primary differences are the following:
- Linux assembler syntax
- The Linux x64 ABI passes the first parameter in %rdi, so this implementation uses that register

**5. Added macro FLAG_STRING to ConfigFlagsList.**
Added a new FLAG_STRING macro. These are specifically for ConfigFlags with string default values, so that we can use the correct string type depending on the platform.
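ConfigFlagsList.h is an X-macro style header, so a new flag kind means a new macro callers can redefine per platform. The sketch below is a hypothetical expansion to show the shape of the idea; the `FLAG_STRING` name is from the PR, but this definition and the example flag are invented for illustration:

```cpp
#include <cstring>

// Hypothetical: expand each string-typed flag into a constant whose string
// type could be swapped per platform by redefining FLAG_STRING.
#define FLAG_STRING(name, defaultValue) const char* const name = defaultValue;

// Hypothetical flag list entry (not from ConfigFlagsList.h).
FLAG_STRING(OutputFileFlag, "output.log")
```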

**6. Build script for Unix platforms**
Added a simple shell script to build on non-Windows platforms.
`./build.sh` builds retail by default; `./build.sh -d` builds debug.

**7. Convert strings to use CH_WSTR**
This change converts some more code to use 2-byte Unicode strings on Linux.
The subset of files in this commit are the files needed to get GCStress to build.
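The motivation for CH_WSTR is that `wchar_t` is 2 bytes on Windows but 4 bytes on most Linux toolchains, so a macro can select a literal prefix that yields 2-byte code units everywhere. This definition is a sketch of the idea, not Chakra's actual one:

```cpp
// Assumption: CH_WSTR picks a string-literal prefix per platform so that
// code units are 2 bytes wide on both Windows and Linux.
#ifdef _WIN32
#define CH_WSTR(s) L ## s   // wchar_t is 2 bytes on Windows
#else
#define CH_WSTR(s) u ## s   // char16_t: 2-byte code units on Linux
#endif

static_assert(sizeof(CH_WSTR("x")[0]) == 2, "expect 2-byte code units");
```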
Hitesh Kanwathirtha, 10 years ago
commit 553c4f368f
100 changed files with 2189 additions and 843 deletions
1. CMakeLists.txt (+28 -7)
2. bin/CMakeLists.txt (+1 -0)
3. bin/ChakraCore/ConfigParserExternals.cpp (+1 -1)
4. bin/GCStress/CMakeLists.txt (+26 -0)
5. bin/GCStress/GCStress.cpp (+26 -12)
6. bin/GCStress/RecyclerTestObject.h (+10 -10)
7. bin/GCStress/StubExternalApi.cpp (+8 -8)
8. bin/GCStress/stdafx.h (+11 -2)
9. build.sh (+31 -0)
10. lib/CMakeLists.txt (+1 -1)
11. lib/Common/Common.h (+12 -8)
12. lib/Common/Common/CMakeLists.txt (+1 -1)
13. lib/Common/Common/CommonCommonPch.h (+2 -0)
14. lib/Common/Common/DateUtilities.cpp (+27 -27)
15. lib/Common/Common/Jobs.cpp (+8 -1)
16. lib/Common/Common/NumberUtilities.cpp (+1 -1)
17. lib/Common/CommonDefines.h (+65 -2)
18. lib/Common/CommonMin.h (+21 -0)
19. lib/Common/CommonPal.h (+44 -25)
20. lib/Common/ConfigFlagsList.h (+12 -7)
21. lib/Common/Core/Api.h (+1 -1)
22. lib/Common/Core/Assertions.h (+8 -2)
23. lib/Common/Core/CMakeLists.txt (+1 -1)
24. lib/Common/Core/CommonCorePch.h (+2 -1)
25. lib/Common/Core/CommonMinMax.h (+2 -0)
26. lib/Common/Core/ConfigFlagsTable.cpp (+15 -8)
27. lib/Common/Core/ConfigFlagsTable.h (+11 -10)
28. lib/Common/Core/ConfigParser.cpp (+15 -6)
29. lib/Common/Core/Output.cpp (+13 -5)
30. lib/Common/Core/Output.h (+10 -4)
31. lib/Common/Core/ProfileInstrument.h (+2 -2)
32. lib/Common/Core/ProfileMemory.cpp (+6 -1)
33. lib/Common/Core/SysInfo.cpp (+42 -6)
34. lib/Common/Core/SysInfo.h (+10 -15)
35. lib/Common/DataStructures/BufferBuilder.cpp (+3 -3)
36. lib/Common/DataStructures/CMakeLists.txt (+1 -1)
37. lib/Common/DataStructures/CommonDataStructuresPch.h (+4 -2)
38. lib/Common/DataStructures/DList.h (+2 -2)
39. lib/Common/DataStructures/FixedBitVector.cpp (+2 -2)
40. lib/Common/DataStructures/ImmutableList.cpp (+0 -1)
41. lib/Common/DataStructures/List.h (+17 -17)
42. lib/Common/DataStructures/PageStack.h (+4 -1)
43. lib/Common/DataStructures/SparseBitVector.cpp (+0 -74)
44. lib/Common/DataStructures/SparseBitVector.h (+2 -77)
45. lib/Common/DataStructures/UnitBitVector.h (+5 -5)
46. lib/Common/DataStructures/WeakReferenceDictionary.h (+2 -2)
47. lib/Common/Exceptions/CMakeLists.txt (+1 -1)
48. lib/Common/Exceptions/ReportError.cpp (+4 -1)
49. lib/Common/Exceptions/Throw.cpp (+10 -4)
50. lib/Common/Exceptions/Throw.h (+2 -1)
51. lib/Common/Memory/ArenaAllocator.cpp (+9 -0)
52. lib/Common/Memory/CMakeLists.txt (+2 -1)
53. lib/Common/Memory/CommonMemoryPch.h (+4 -0)
54. lib/Common/Memory/CustomHeap.cpp (+33 -33)
55. lib/Common/Memory/HeapAllocator.cpp (+5 -3)
56. lib/Common/Memory/HeapBlock.cpp (+13 -12)
57. lib/Common/Memory/HeapBlock.h (+19 -17)
58. lib/Common/Memory/HeapBlockMap.cpp (+4 -1)
59. lib/Common/Memory/HeapBucket.cpp (+19 -17)
60. lib/Common/Memory/HeapInfo.cpp (+3 -0)
61. lib/Common/Memory/IdleDecommitPageAllocator.cpp (+10 -2)
62. lib/Common/Memory/IdleDecommitPageAllocator.h (+4 -1)
63. lib/Common/Memory/LargeHeapBlock.cpp (+8 -8)
64. lib/Common/Memory/LargeHeapBucket.cpp (+6 -6)
65. lib/Common/Memory/LeakReport.cpp (+29 -10)
66. lib/Common/Memory/LeakReport.h (+4 -2)
67. lib/Common/Memory/MarkContext.inl (+4 -4)
68. lib/Common/Memory/PageAllocator.cpp (+82 -54)
69. lib/Common/Memory/PageAllocator.h (+44 -13)
70. lib/Common/Memory/PagePool.h (+6 -1)
71. lib/Common/Memory/Recycler.cpp (+220 -203)
72. lib/Common/Memory/Recycler.h (+15 -2)
73. lib/Common/Memory/Recycler.inl (+6 -2)
74. lib/Common/Memory/RecyclerObjectDumper.cpp (+2 -2)
75. lib/Common/Memory/RecyclerObjectGraphDumper.cpp (+6 -6)
76. lib/Common/Memory/RecyclerObjectGraphDumper.h (+1 -1)
77. lib/Common/Memory/RecyclerPageAllocator.cpp (+5 -1)
78. lib/Common/Memory/RecyclerPageAllocator.h (+3 -0)
79. lib/Common/Memory/RecyclerPointers.h (+12 -0)
80. lib/Common/Memory/RecyclerWeakReference.h (+5 -0)
81. lib/Common/Memory/RecyclerWriteBarrierManager.cpp (+4 -4)
82. lib/Common/Memory/SmallBlockDeclarations.inl (+2 -0)
83. lib/Common/Memory/SmallFinalizableHeapBlock.cpp (+4 -4)
84. lib/Common/Memory/SmallFinalizableHeapBucket.cpp (+4 -4)
85. lib/Common/Memory/SmallHeapBlockAllocator.cpp (+9 -9)
86. lib/Common/Memory/SmallHeapBlockAllocator.h (+2 -2)
87. lib/Common/Memory/SmallLeafHeapBlock.cpp (+1 -1)
88. lib/Common/Memory/SmallNormalHeapBlock.cpp (+3 -3)
89. lib/Common/Memory/StressTest.cpp (+2 -0)
90. lib/Common/Memory/VirtualAllocWrapper.cpp (+9 -9)
91. lib/Common/Memory/amd64/amd64_SAVE_REGISTERS.S (+36 -0)
92. lib/Common/Util/CMakeLists.txt (+1 -1)
93. lib/Parser/Parse.h (+1 -1)
94. pal/inc/pal.h (+42 -16)
95. pal/inc/pal_mstypes.h (+2 -0)
96. pal/inc/rt/palrt.h (+4 -14)
97. pal/inc/unixasmmacros.inc (+41 -0)
98. pal/inc/unixasmmacrosamd64.inc (+340 -0)
99. pal/inc/volatile.h (+474 -0)
100. pal/src/CMakeLists.txt (+92 -4)

+ 28 - 7
CMakeLists.txt

@@ -59,20 +59,26 @@ if(CLR_CMAKE_PLATFORM_UNIX)
         add_definitions(-D_M_X64 -D_M_AMD64)
     endif(CMAKE_SYSTEM_PROCESSOR STREQUAL x86_64 OR CMAKE_SYSTEM_PROCESSOR STREQUAL amd64)
 
-    add_definitions("-fms-extensions")
-
-    # Disable some warnings
-    add_definitions("-Wno-tautological-constant-out-of-range-compare")
     add_definitions("-D__STDC_WANT_LIB_EXT1__=1")
-    add_definitions("-std=c++11")
-    add_definitions("-stdlib=libc++")
     add_definitions(
-        -DPAL_STDCPP_COMPAT=1
         -DUNICODE
         -D_SAFECRT_USE_CPP_OVERLOADS=1
         )
 
+    # xplat-todo: enable the JIT for Linux
+    add_definitions(
+        -DDISABLE_JIT=1
+        )
+      
+    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fms-extensions")
+
+    # Disable some warnings
+    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-tautological-constant-out-of-range-compare")
+    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
+
     # xplat-todo: revisit these
+    # also, we should change this from add_definitions- the intent of that
+    # is to just add -D flags
     add_definitions(
         -fdelayed-template-parsing
         -Wno-microsoft
@@ -90,9 +96,20 @@ if(CLR_CMAKE_PLATFORM_UNIX)
         -Wno-null-arithmetic
         -Wno-tautological-undefined-compare
         -Wno-address-of-temporary  # vtinfo.h, VirtualTableInfo<T>::RegisterVirtualTable
+        -Wno-null-conversion # Check shmemory.cpp and cs.cpp here...
     )
 endif(CLR_CMAKE_PLATFORM_UNIX)
 
+if(CMAKE_BUILD_TYPE STREQUAL Debug)
+    add_definitions(
+        -DDBG=1
+        -DDEBUG=1
+        -DDBG_DUMP=1        
+    )
+    # xplat-todo: reenable this warning
+    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-writable-strings")
+endif(CMAKE_BUILD_TYPE STREQUAL Debug)
+
 if(IS_64BIT_BUILD)
     add_definitions(
         -DBIT64=1
@@ -104,8 +121,10 @@ if(CLR_CMAKE_PLATFORM_UNIX)
    add_definitions(-DFEATURE_PAL)
 endif(CLR_CMAKE_PLATFORM_UNIX)
 
+enable_language(ASM)
 
 include_directories(
+    .
     lib/Common
     lib/Common/PlaceHolder
     pal
@@ -113,3 +132,5 @@ include_directories(
     pal/inc/rt
     )
 add_subdirectory (lib)
+add_subdirectory (bin)
+add_subdirectory (pal)

+ 1 - 0
bin/CMakeLists.txt

@@ -0,0 +1 @@
+add_subdirectory (GCStress)

+ 1 - 1
bin/ChakraCore/ConfigParserExternals.cpp

@@ -14,7 +14,7 @@ void ConfigParserAPI::DisplayInitialOutput(__in LPWSTR moduleName)
 {
 }
 
-LPWSTR JsUtil::ExternalApi::GetFeatureKeyName()
+LPCWSTR JsUtil::ExternalApi::GetFeatureKeyName()
 {
     return L"";
 }

+ 26 - 0
bin/GCStress/CMakeLists.txt

@@ -0,0 +1,26 @@
+add_executable (GCStress
+  GCStress.cpp
+  RecyclerTestObject.cpp
+  stdafx.cpp
+  StubExternalApi.cpp
+    )
+
+include_directories(..)
+
+target_include_directories (GCStress  
+  PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}
+  $<BUILD_INTERFACE:${ROOT_SOURCE_DIR}/lib/Common>
+  $<BUILD_INTERFACE:${ROOT_SOURCE_DIR}/lib/Common/Memory>  
+  )
+
+set(CMAKE_EXE_LINKER_FLAGS "-static-libgcc -static-libstdc++ -lsodium")
+
+target_link_libraries (GCStress
+  PRIVATE Chakra.Common.Memory
+  PRIVATE Chakra.Common.Common  
+  PRIVATE Chakra.Common.Core
+  PRIVATE Chakra.Common.DataStructures
+  PRIVATE Chakra.Common.Exceptions
+  PRIVATE Chakra.Pal
+  )
+

+ 26 - 12
bin/GCStress/GCStress.cpp

@@ -9,7 +9,7 @@ void DoVerify(bool value, const char * expr, const char * file, int line)
 {
     if (!value)
     {
-        wprintf(L"==== FAILURE: '%S' evaluated to false. %S(%d)\n", expr, file, line);
+        wprintf(CH_WSTR("==== FAILURE: '%S' evaluated to false. %S(%d)\n"), expr, file, line);
         DebugBreak();
     }
 }
@@ -231,12 +231,18 @@ void SimpleRecyclerTest()
     BuildOperationTable();
 
     // Construct Recycler instance and use it
+#if ENABLE_BACKGROUND_PAGE_FREEING
     PageAllocator::BackgroundPageQueue backgroundPageQueue;
+#endif
     IdleDecommitPageAllocator pageAllocator(nullptr, 
         PageAllocatorType::PageAllocatorType_Thread,
         Js::Configuration::Global.flags,
         0 /* maxFreePageCount */, PageAllocator::DefaultMaxFreePageCount /* maxIdleFreePageCount */,
-        false /* zero pages */, &backgroundPageQueue);
+        false /* zero pages */
+#if ENABLE_BACKGROUND_PAGE_FREEING
+        , &backgroundPageQueue
+#endif
+        );
 
     try
     {
@@ -258,7 +264,7 @@ void SimpleRecyclerTest()
         }
 #endif
 
-        wprintf(L"Recycler created, initializing heap...\n");
+        wprintf(CH_WSTR("Recycler created, initializing heap...\n"));
         
         // Initialize stack roots and add to our roots table        
         RecyclerTestObject * stackRoots[stackRootCount];
@@ -292,7 +298,7 @@ void SimpleRecyclerTest()
             InsertObject();
         }
 
-        wprintf(L"Initialization complete\n");
+        wprintf(CH_WSTR("Initialization complete\n"));
 
         // Do an initial walk
         WalkHeap();
@@ -316,7 +322,7 @@ void SimpleRecyclerTest()
         printf("Error: OOM\n");
     }
 
-    wprintf(L"==== Test completed.\n");
+    wprintf(CH_WSTR("==== Test completed.\n"));
 }
 
 //////////////////// End test implementations ////////////////////
@@ -341,8 +347,8 @@ bool GetDeviceFamilyInfo(
 void usage(const WCHAR* self)
 {
     wprintf(
-        L"usage: %s [-?|-v] [-js <jscript options from here on>]\n"
-        L"  -v\n\tverbose logging\n",
+        CH_WSTR("usage: %s [-?|-v] [-js <jscript options from here on>]\n")
+        CH_WSTR("  -v\n\tverbose logging\n"),
         self);
 }
 
@@ -354,30 +360,30 @@ int __cdecl wmain(int argc, __in_ecount(argc) WCHAR* argv[])
     {
         if (argv[i][0] == '-')
         {
-            if (wcscmp(argv[i], L"-?") == 0)
+            if (wcscmp(argv[i], CH_WSTR("-?")) == 0)
             {
                 usage(argv[0]);
                 exit(1);
             }
-            else if (wcscmp(argv[i], L"-v") == 0)
+            else if (wcscmp(argv[i], CH_WSTR("-v")) == 0)
             {
                 verbose = true;
             }
-            else if (wcscmp(argv[i], L"-js") == 0 || wcscmp(argv[i], L"-JS") == 0)
+            else if (wcscmp(argv[i], CH_WSTR("-js")) == 0 || wcscmp(argv[i], CH_WSTR("-JS")) == 0)
             {
                 jscriptOptions = i;
                 break;
             }
             else 
             {
-                wprintf(L"unknown argument '%s'\n", argv[i]);
+                wprintf(CH_WSTR("unknown argument '%s'\n"), argv[i]);
                 usage(argv[0]);
                 exit(1);
             }
         }
         else
         {
-            wprintf(L"unknown argument '%s'\n", argv[i]);
+            wprintf(CH_WSTR("unknown argument '%s'\n"), argv[i]);
             usage(argv[0]);
             exit(1);
         }
@@ -396,4 +402,12 @@ int __cdecl wmain(int argc, __in_ecount(argc) WCHAR* argv[])
     return 0;
 }
 
+#ifndef _WIN32
+int main(int argc, char** argv)
+{
+    int ret = wmain(0, NULL);
+    PAL_Shutdown();
+    return ret;
+}
+#endif
 //////////////////// End program entrypoint ////////////////////

+ 10 - 10
bin/GCStress/RecyclerTestObject.h

@@ -32,8 +32,8 @@ public:
 
         currentWalkDepth = 0;
 
-        wprintf(L"-------------------------------------------\n");
-        wprintf(L"Full heap walk starting\n");
+        wprintf(CH_WSTR("-------------------------------------------\n"));
+        wprintf(CH_WSTR("Full heap walk starting\n"));
     }
     
     static void WalkReference(RecyclerTestObject * object)
@@ -65,14 +65,14 @@ public:
     {
         VerifyCondition(currentWalkDepth == 0);
 
-        wprintf(L"Full heap walk finished\n");
-        wprintf(L"Object Count:   %12llu\n", (unsigned long long) walkObjectCount);
-        wprintf(L"Scanned Bytes:  %12llu\n", (unsigned long long) walkScannedByteCount);
-        wprintf(L"Barrier Bytes:  %12llu\n", (unsigned long long) walkBarrierByteCount);
-        wprintf(L"Tracked Bytes:  %12llu\n", (unsigned long long) walkTrackedByteCount);
-        wprintf(L"Leaf Bytes:     %12llu\n", (unsigned long long) walkLeafByteCount);
-        wprintf(L"Total Bytes:    %12llu\n", (unsigned long long) (walkScannedByteCount + walkBarrierByteCount + walkTrackedByteCount + walkLeafByteCount));
-        wprintf(L"Max Depth:      %12llu\n", (unsigned long long) maxWalkDepth);
+        wprintf(CH_WSTR("Full heap walk finished\n"));
+        wprintf(CH_WSTR("Object Count:   %12llu\n"), (unsigned long long) walkObjectCount);
+        wprintf(CH_WSTR("Scanned Bytes:  %12llu\n"), (unsigned long long) walkScannedByteCount);
+        wprintf(CH_WSTR("Barrier Bytes:  %12llu\n"), (unsigned long long) walkBarrierByteCount);
+        wprintf(CH_WSTR("Tracked Bytes:  %12llu\n"), (unsigned long long) walkTrackedByteCount);
+        wprintf(CH_WSTR("Leaf Bytes:     %12llu\n"), (unsigned long long) walkLeafByteCount);
+        wprintf(CH_WSTR("Total Bytes:    %12llu\n"), (unsigned long long) (walkScannedByteCount + walkBarrierByteCount + walkTrackedByteCount + walkLeafByteCount));
+        wprintf(CH_WSTR("Max Depth:      %12llu\n"), (unsigned long long) maxWalkDepth);
     }
 
     // Virtual methods

+ 8 - 8
bin/GCStress/StubExternalApi.cpp

@@ -23,7 +23,7 @@ __forceinline void js_wmemcpy_s(__ecount(sizeInWords) wchar_t *dst, size_t sizeI
     Assert(count <= sizeInWords && count <= SIZE_MAX/sizeof(wchar_t));
     if(!(count <= sizeInWords && count <= SIZE_MAX/sizeof(wchar_t)))
     {
-        ReportFatalException(NULL, E_FAIL, Fatal_Internal_Error, 2);
+        ReportFatalException((ULONG_PTR) NULL, E_FAIL, Fatal_Internal_Error, 2);
     }
     else
     {
@@ -33,16 +33,16 @@ __forceinline void js_wmemcpy_s(__ecount(sizeInWords) wchar_t *dst, size_t sizeI
 
 bool ConfigParserAPI::FillConsoleTitle(__ecount(cchBufferSize) LPWSTR buffer, size_t cchBufferSize, __in LPWSTR moduleName)
 {
-    swprintf_s(buffer, cchBufferSize, L"Chakra GC: %d - %s", GetCurrentProcessId(), moduleName);
+    swprintf_s(buffer, cchBufferSize, CH_WSTR("Chakra GC: %d - %s"), GetCurrentProcessId(), moduleName);
 
     return true;
 }
 
 void ConfigParserAPI::DisplayInitialOutput(__in LPWSTR moduleName)
 {
-    Output::Print(L"Chakra GC\n");
-    Output::Print(L"INIT: PID        : %d\n", GetCurrentProcessId());
-    Output::Print(L"INIT: DLL Path   : %s\n", moduleName);
+    Output::Print(CH_WSTR("Chakra GC\n"));
+    Output::Print(CH_WSTR("INIT: PID        : %d\n"), GetCurrentProcessId());
+    Output::Print(CH_WSTR("INIT: DLL Path   : %s\n"), moduleName);
 }
 
 #ifdef ENABLE_JS_ETW
@@ -77,9 +77,9 @@ bool JsUtil::ExternalApi::RaiseOnIntOverflow()
     return false;
 }
 
-LPWSTR JsUtil::ExternalApi::GetFeatureKeyName()
+LPCWSTR JsUtil::ExternalApi::GetFeatureKeyName()
 {
-    return  L"Software\\Microsoft\\Internet Explorer\\ChakraRecycler";
+    return  CH_WSTR("Software\\Microsoft\\Internet Explorer\\ChakraRecycler");
 }
 
 #if DBG || defined(EXCEPTION_CHECK)
@@ -142,4 +142,4 @@ namespace Js
     void GCTelemetry::LogGCPauseStartTime() {};
     void GCTelemetry::LogGCPauseEndTime() {};
 };
-#endif
+#endif

+ 11 - 2
bin/GCStress/stdafx.h

@@ -6,15 +6,24 @@
 
 #include "TargetVer.h"
 
+#ifdef _WIN32
 #include <windows.h>
 #include <winbase.h>
 #include <oleauto.h>
+#else
+#include <CommonPal.h>
+#endif
+
+#ifdef _MSC_VER
 #pragma warning(disable:4985)
 #include <intrin.h>
-#include <wtypes.h>
+#endif
 
+#ifndef USING_PAL_STDLIB
+#include <wtypes.h>
 #include <stdio.h>
-#include <tchar.h>
+#include <cstdlib>
+#endif
 
 // This is an intentionally lame name because we can't use Assert or Verify etc
 

+ 31 - 0
build.sh

@@ -0,0 +1,31 @@
+#-------------------------------------------------------------------------------------------------------
+# Copyright (C) Microsoft. All rights reserved.
+# Licensed under the MIT license. See LICENSE.txt file in the project root for full license information.
+#-------------------------------------------------------------------------------------------------------
+
+if [ ! -d "BuildLinux" ]; then
+    mkdir BuildLinux;
+fi
+
+pushd BuildLinux > /dev/null
+
+DEBUG_BUILD=0
+while getopts ":d" opt; do
+    case $opt in
+        d)
+        DEBUG_BUILD=1
+        ;;
+    esac
+done
+
+if [ $DEBUG_BUILD -eq 1 ]; then
+    echo Generating Debug makefiles
+    cmake -DCMAKE_BUILD_TYPE=Debug ..
+else
+    echo Generating Retail makefiles
+    echo Building Retail;
+    cmake -DCMAKE_BUILD_TYPE=Release ..
+fi
+
+make |& tee build.log
+popd > /dev/null

+ 1 - 1
lib/CMakeLists.txt

@@ -1,2 +1,2 @@
 add_subdirectory (Common)
-add_subdirectory (Parser)
+# add_subdirectory (Parser)

+ 12 - 8
lib/Common/Common.h

@@ -6,7 +6,16 @@
 
 #include "CommonMinMemory.h"
 
+#ifdef _WIN32
+typedef _Return_type_success_(return >= 0) LONG NTSTATUS;
+#define NT_SUCCESS(Status) (((NTSTATUS)(Status)) >= 0)
+#endif
+
+// If we're using the PAL for C++ standard library compat,
+// we don't need to include wchar for string handling
+#ifndef USING_PAL_STDLIB
 // === C Runtime Header Files ===
+#include <wchar.h>
 #include <stdarg.h>
 #include <float.h>
 #include <limits.h>
@@ -16,16 +25,10 @@
 #include <math.h>
 #endif
 #include <time.h>
-
-#ifdef _WIN32
-typedef _Return_type_success_(return >= 0) LONG NTSTATUS;
-#define NT_SUCCESS(Status) (((NTSTATUS)(Status)) >= 0)
-
-#include <wchar.h>
 #include <io.h>
+#include <malloc.h>
 #endif
 
-#include <malloc.h>
 extern "C" void * _AddressOfReturnAddress(void);
 
 #include "Common/GetCurrentFrameId.h"
@@ -138,9 +141,10 @@ class AutoExpDummyClass
 {
 };
 
+#ifdef _MSC_VER
 #pragma warning(push)
 #if defined(PROFILE_RECYCLER_ALLOC) || defined(HEAP_TRACK_ALLOC) || defined(ENABLE_DEBUG_CONFIG_OPTIONS)
 #include <typeinfo.h>
 #endif
 #pragma warning(pop)
-
+#endif

+ 1 - 1
lib/Common/Common/CMakeLists.txt

@@ -1,4 +1,4 @@
-add_library (Chakra.Common.Common
+add_library (Chakra.Common.Common STATIC
     Api.cpp
     CfgLogger.cpp
     CommonCommonPch.cpp

+ 2 - 0
lib/Common/Common/CommonCommonPch.h

@@ -20,11 +20,13 @@
 #include "Common/NumberUtilitiesBase.h"
 #include "Common/NumberUtilities.h"
 
+#ifdef _MSC_VER
 #pragma warning(push)
 #if defined(PROFILE_RECYCLER_ALLOC) || defined(HEAP_TRACK_ALLOC) || defined(ENABLE_DEBUG_CONFIG_OPTIONS)
 #include <typeinfo.h>
 #endif
 #pragma warning(pop)
+#endif
 
 
 

+ 27 - 27
lib/Common/Common/DateUtilities.cpp

@@ -32,41 +32,41 @@ namespace Js
 
     const wchar_t g_rgpszDay[7][4] =
     {
-        L"Sun",
-        L"Mon",
-        L"Tue",
-        L"Wed",
-        L"Thu",
-        L"Fri",
-        L"Sat"
+        CH_WSTR("Sun"),
+        CH_WSTR("Mon"),
+        CH_WSTR("Tue"),
+        CH_WSTR("Wed"),
+        CH_WSTR("Thu"),
+        CH_WSTR("Fri"),
+        CH_WSTR("Sat")
     };
 
     const wchar_t g_rgpszMonth[12][4] =
     {
-        L"Jan",
-        L"Feb",
-        L"Mar",
-        L"Apr",
-        L"May",
-        L"Jun",
-        L"Jul",
-        L"Aug",
-        L"Sep",
-        L"Oct",
-        L"Nov",
-        L"Dec"
+        CH_WSTR("Jan"),
+        CH_WSTR("Feb"),
+        CH_WSTR("Mar"),
+        CH_WSTR("Apr"),
+        CH_WSTR("May"),
+        CH_WSTR("Jun"),
+        CH_WSTR("Jul"),
+        CH_WSTR("Aug"),
+        CH_WSTR("Sep"),
+        CH_WSTR("Oct"),
+        CH_WSTR("Nov"),
+        CH_WSTR("Dec")
     };
 
     const wchar_t g_rgpszZone[8][4] =
     {
-        L"EST",
-        L"EDT",
-        L"CST",
-        L"CDT",
-        L"MST",
-        L"MDT",
-        L"PST",
-        L"PDT"
+        CH_WSTR("EST"),
+        CH_WSTR("EDT"),
+        CH_WSTR("CST"),
+        CH_WSTR("CDT"),
+        CH_WSTR("MST"),
+        CH_WSTR("MDT"),
+        CH_WSTR("PST"),
+        CH_WSTR("PDT")
     };
 
     //

+ 8 - 1
lib/Common/Common/Jobs.cpp

@@ -25,6 +25,7 @@
 #include "Common/ThreadService.h"
 #include "Common/Jobs.h"
 #include "Common/Jobs.inl"
+#include "Core/CommonMinMax.h"
 
 namespace JsUtil
 {
@@ -464,6 +465,9 @@ namespace JsUtil
         // Do nothing
     }
 
+// Xplat-todo: revive BackgroundJobProcessor- we need this for the JIT
+#if ENABLE_BACKGROUND_JOB_PROCESSOR
+
     // -------------------------------------------------------------------------------------------------------------------------
     // BackgroundJobProcessor
     // -------------------------------------------------------------------------------------------------------------------------
@@ -483,6 +487,7 @@ namespace JsUtil
         {
             int processorCount = AutoSystemInfo::Data.GetNumberOfPhysicalProcessors();
             //There is 2 threads already in play, one UI (main) thread and a GC thread. So subtract 2 from processorCount to account for the same.
+
             this->maxThreadCount = max(1, min(processorCount - 2, CONFIG_FLAG(MaxJitThreadCount)));
         }
     }
@@ -1257,9 +1262,10 @@ namespace JsUtil
         }
     }
 
+// xplat-todo: this entire function probably needs to be ifdefed out
     int BackgroundJobProcessor::ExceptFilter(LPEXCEPTION_POINTERS pEP)
     {
-#if DBG
+#if DBG && defined(_WIN32)
         // Assert exception code
         if (pEP->ExceptionRecord->ExceptionCode == STATUS_ASSERTION_FAILURE)
         {
@@ -1399,4 +1405,5 @@ namespace JsUtil
         L"BackgroundJobProcessor thread 15",
         L"BackgroundJobProcessor thread 16" };
 #endif
+#endif // ENABLE_BACKGROUND_JOB_PROCESSOR
 }

+ 1 - 1
lib/Common/Common/NumberUtilities.cpp

@@ -254,7 +254,7 @@ namespace Js
             return false;
         }
         uint32 val = (uint32)(str[0] - L'0');
-        int calcLen = min(length, 9);
+        int calcLen = (length < 9 ? length : 9);
         for (int i = 1; i < calcLen; i++)
         {
             if ((str[i] < L'0')|| (str[i] > L'9'))

+ 65 - 2
lib/Common/CommonDefines.h

@@ -87,6 +87,17 @@
 // NOTE: Disabling these might not work and are not fully supported and maintained
 // Even if it builds, it may not work properly. Disable at your own risk
 
+// Config options
+#ifdef _WIN32
+#define CONFIG_CONSOLE_AVAILABLE 1
+#define CONFIG_PARSE_CONFIG_FILE 1
+#define CONFIG_RICH_TRACE_FORMAT 1
+#else
+#define CONFIG_CONSOLE_AVAILABLE 0
+#define CONFIG_PARSE_CONFIG_FILE 0
+#define CONFIG_RICH_TRACE_FORMAT 0
+#endif
+
 // ByteCode
 #define VARIABLE_INT_ENCODING 1                     // Byte code serialization variable size int field encoding
 #define BYTECODE_BRANCH_ISLAND                      // Byte code short branch and branch island
@@ -120,11 +131,23 @@
 // These are disabled because these GC features depend on hardware
 // write-watch support that the Windows Memory Manager provides.
 #ifdef _WIN32
+#define SYSINFO_IMAGE_BASE_AVAILABLE 1
 #define ENABLE_CONCURRENT_GC 1
 #define ENABLE_PARTIAL_GC 1
+#define ENABLE_BACKGROUND_PAGE_ZEROING 1
+#define ENABLE_BACKGROUND_PAGE_FREEING 1
+#define ENABLE_RECYCLER_TYPE_TRACKING 1
 #else
+#define SYSINFO_IMAGE_BASE_AVAILABLE 0
 #define ENABLE_CONCURRENT_GC 0
 #define ENABLE_PARTIAL_GC 0
+#define ENABLE_BACKGROUND_PAGE_ZEROING 0
+#define ENABLE_BACKGROUND_PAGE_FREEING 0
+#define ENABLE_RECYCLER_TYPE_TRACKING 0
+#endif
+
+#if ENABLE_BACKGROUND_PAGE_ZEROING && !ENABLE_BACKGROUND_PAGE_FREEING
+#error "Background page zeroing can't be turned on if freeing pages in the background is disabled"
 #endif
 
 #define BUCKETIZE_MEDIUM_ALLOCATIONS 1              // *** TODO: Won't build if disabled currently
@@ -139,6 +162,7 @@
 #if DISABLE_JIT
 #define ENABLE_NATIVE_CODEGEN 0
 #define ENABLE_PROFILE_INFO 0
+#define ENABLE_BACKGROUND_JOB_PROCESSOR 0
 #define ENABLE_BACKGROUND_PARSING 0                 // Disable background parsing in this mode
                                                     // We need to decouple the Jobs infrastructure out of
                                                     // Backend to make background parsing work with JIT disabled
@@ -152,6 +176,8 @@
 // By default, enable the JIT
 #define ENABLE_NATIVE_CODEGEN 1
 #define ENABLE_PROFILE_INFO 1
+
+#define ENABLE_BACKGROUND_JOB_PROCESSOR 1
 #define ENABLE_BACKGROUND_PARSING 1
 #define ENABLE_COPYONACCESS_ARRAY 1
 #ifndef DYNAMIC_INTERPRETER_THUNK
@@ -246,7 +272,11 @@
 #endif
 #define RUNTIME_DATA_COLLECTION
 #define SECURITY_TESTING
+
+// xplat-todo: Temporarily disable profile output on non-Win32 builds
+#ifdef _WIN32
 #define PROFILE_EXEC
+#endif
 
 #define BGJIT_STATS
 #define REJIT_STATS
@@ -297,7 +327,12 @@
 //----------------------------------------------------------------------------------------------------
 #ifdef DEBUG
 #define BYTECODE_TESTING
+
+// xplat-todo: revive FaultInjection on non-Win32 platforms
+// currently depends on io.h
+#ifdef _WIN32
 #define FAULT_INJECTION
+#endif
 #define RECYCLER_NO_PAGE_REUSE
 #ifdef NTBUILD
 #define INTERNAL_MEM_PROTECT_HEAP_ALLOC
@@ -307,8 +342,12 @@
 
 #ifdef DBG
 #define VALIDATE_ARRAY
+
+// xplat-todo: Do we need dump generation for non-Win32 platforms?
+#ifdef _WIN32
 #define GENERATE_DUMP
 #endif
+#endif
 
 #if DBG_DUMP
 #undef DBG_EXTRAFIELD   // make sure we don't extra fields in free build.
@@ -322,16 +361,28 @@
 #define MISSING_PROPERTY_STATS
 #define EXCEPTION_RECOVERY 1
 #define EXCEPTION_CHECK                     // Check exception handling.
+#ifdef _WIN32
 #define PROFILE_EXEC
+#endif
 #define PROFILE_MEM
 #define PROFILE_TYPES
 #define PROFILE_EVALMAP
 #define PROFILE_OBJECT_LITERALS
 #define PROFILE_BAILOUT_RECORD_MEMORY
 #define MEMSPECT_TRACKING
+
+// xplat-todo: Depends on C++ type-info
+// enable later on non-VC++ compilers
+
+#ifdef _WIN32
 #define PROFILE_RECYCLER_ALLOC
-#define PROFILE_STRINGS
+// Needs to compile in debug mode
+// Just needs strings converted
 #define PROFILE_DICTIONARY 1
+#endif
+
+#define PROFILE_STRINGS
+
 #define RECYCLER_SLOW_CHECK_ENABLED          // This can be disabled to speed up the debug build's GC
 #define RECYCLER_STRESS
 #define RECYCLER_STATS
@@ -351,7 +402,13 @@
 #define PAGEALLOCATOR_PROTECT_FREEPAGE
 #define ARENA_MEMORY_VERIFY
 #define SEPARATE_ARENA
+
+// xplat-todo: This depends on C++ type-tracking
+// Need to re-enable on non-VC++ compilers
+#ifdef _WIN32
 #define HEAP_TRACK_ALLOC
+#endif
+
 #define CHECK_MEMORY_LEAK
 #define LEAK_REPORT
 
@@ -455,7 +512,6 @@
 #if _M_IX86
 #define I386_ASM 1
 #endif //_M_IX86
-#endif // _WIN32 || _WIN64
 
 #ifndef PDATA_ENABLED
 #if defined(_M_ARM32_OR_ARM64) || defined(_M_X64)
@@ -464,6 +520,7 @@
 #define PDATA_ENABLED 0
 #endif
 #endif
+#endif // _WIN32 || _WIN64
 
 #ifndef _WIN32
 #define DISABLE_SEH 1
@@ -506,7 +563,9 @@
 // HEAP_TRACK_ALLOC and RECYCLER_STATS
 #if defined(LEAK_REPORT) || defined(CHECK_MEMORY_LEAK)
 #define RECYCLER_DUMP_OBJECT_GRAPH
+#ifdef _WIN32
 #define HEAP_TRACK_ALLOC
+#endif
 #define RECYCLER_STATS
 #endif
 
@@ -522,6 +581,10 @@
 
 
 #if defined(HEAP_TRACK_ALLOC) || defined(PROFILE_RECYCLER_ALLOC)
+#ifndef _WIN32
+#error "Not yet supported on non-VC++ compilers"
+#endif
+
 #define TRACK_ALLOC
 #define TRACE_OBJECT_LIFETIME           // track a particular object's lifetime
 #endif

+ 21 - 0
lib/Common/CommonMin.h

@@ -7,6 +7,7 @@
 #include "CommonBasic.h"
 
 // === C Runtime Header Files ===
+#ifndef USING_PAL_STDLIB
 #pragma warning(push)
 #pragma warning(disable: 4995) /* 'function': name was marked as #pragma deprecated */
 #include <stdio.h>
@@ -14,10 +15,30 @@
 #ifdef _WIN32
 #include <intrin.h>
 #endif
+#endif
 
 // === Core Header Files ===
+// In Debug mode, the PAL's definitions of max and min are insufficient
+// since some of our code expects the template min/max instead, so we
+// include that here
+#if defined(DBG) && !defined(_MSC_VER)
+#pragma push_macro("NO_PAL_MINMAX")
+#pragma push_macro("_Post_equal_to")
+#pragma push_macro("_Post_satisfies_")
+#define NO_PAL_MINMAX
+#define _Post_equal_to_(x)
+#define _Post_satisfies_(x)
+#endif
+
 #include "Core/CommonMinMax.h"
 
+// Restore the macros
+#if defined(DBG) && !defined(_MSC_VER)
+#pragma pop_macro("NO_PAL_MINMAX")
+#pragma pop_macro("_Post_equal_to")
+#pragma pop_macro("_Post_satisfies_")
+#endif
+
 #include "EnumHelp.h"
 #include "Core/Assertions.h"
 #include "Core/SysInfo.h"

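The push_macro/pop_macro dance in the CommonMin.h change above can be sketched in isolation. This is a minimal illustration of the mechanism only; `NO_PAL_MINMAX` here is given stand-in values, not the PAL's actual semantics:

```cpp
#include <cassert>

// A macro is in effect with some prior definition...
#define NO_PAL_MINMAX 0

// ...save it, then temporarily redefine it for the duration of an
// include (here simulated by reading the macro directly).
#pragma push_macro("NO_PAL_MINMAX")
#undef NO_PAL_MINMAX
#define NO_PAL_MINMAX 1

constexpr int kDuringInclude = NO_PAL_MINMAX;  // sees the temporary value

// Restore whatever definition was saved by push_macro.
#pragma pop_macro("NO_PAL_MINMAX")

constexpr int kAfterPop = NO_PAL_MINMAX;       // original value is back
```

`#pragma push_macro`/`pop_macro` is supported by MSVC, GCC, and Clang, which is what makes this a workable cross-compiler technique here.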
+ 44 - 25
lib/Common/CommonPal.h

@@ -24,19 +24,22 @@ typedef wchar_t wchar16;
 
 // xplat-todo: get a better name for this macro
 #define CH_WSTR(s) L##s
+#define INIT_PRIORITY(x)
 
 #define get_cpuid __cpuid
 
 #else // !_WIN32
 
-#include "pal.h"
+#define USING_PAL_STDLIB 1
+
+#include "inc/pal.h"
 #include "inc/rt/palrt.h"
 #include "inc/rt/no_sal2.h"
 #include "inc/rt/oaidl.h"
-#include <emmintrin.h>
 
 typedef char16_t wchar16;
 #define CH_WSTR(s) u##s
+#define INIT_PRIORITY(x) __attribute__((init_priority(x)))
 
 // xplat-todo: verify below is correct
 #include <cpuid.h>
@@ -50,6 +53,12 @@ inline int get_cpuid(int cpuInfo[4], int function_id)
             reinterpret_cast<unsigned int*>(&cpuInfo[3]));
 }
 
+inline void DebugBreak()
+{
+    asm ("int3");
+    __builtin_unreachable();
+}
+
 #define _BitScanForward BitScanForward
 #define _BitScanForward64 BitScanForward64
 #define _BitScanReverse BitScanReverse
@@ -58,29 +67,7 @@ inline int get_cpuid(int cpuInfo[4], int function_id)
 #define _bittestandset BitTestAndSet
 #define _interlockedbittestandset InterlockedBitTestAndSet
 
-#ifdef PAL_STDCPP_COMPAT
-// SAL.h doesn't define these if PAL_STDCPP_COMPAT is defined
-// Apparently, some C++ headers will conflict with this-
-// not sure which ones but stubbing them out for now in linux-
-// we can revisit if we do hit a conflict
-#define __in    _SAL1_Source_(__in, (), _In_)
-#define __out   _SAL1_Source_(__out, (), _Out_)
-
-#define fclose          PAL_fclose
-#define fflush          PAL_fflush
-#define fwprintf        PAL_fwprintf
-#define wcschr          PAL_wcschr
-#define wcscmp          PAL_wcscmp
-#define wcslen          PAL_wcslen
-#define wcsncmp         PAL_wcsncmp
-#define wcsrchr         PAL_wcsrchr
-#define wcsstr          PAL_wcsstr
-#define wprintf         PAL_wprintf
-
-#define stdout          PAL_stdout
-#endif // PAL_STDCPP_COMPAT
-
-#define FILE PAL_FILE
+#define DbgRaiseAssertionFailure() __asm__ volatile("int $0x03");
 
 // These are not available in pal
 #define fwprintf_s      fwprintf
@@ -289,6 +276,8 @@ int GetCurrentThreadStackBounds(char** stackBase, char** stackEnd);
 // xplat-todo: cryptographically secure PRNG?
 errno_t rand_s(unsigned int* randomValue);
 
+#define MAXUINT32   ((uint32_t)~((uint32_t)0))
+#define MAXINT32    ((int32_t)(MAXUINT32 >> 1))
 #endif // _WIN32
 
 
@@ -318,3 +307,33 @@ errno_t rand_s(unsigned int* randomValue);
 #else
 #define _NOEXCEPT noexcept
 #endif
+
+// xplat-todo: can we get rid of this for clang?
+// Including xmmintrin.h right now creates a ton of
+// compile errors, so temporarily defining this for clang
+// to avoid including that header
+#ifndef _MSC_VER
+#define _MM_HINT_T0 3
+#endif
+
+// xplat-todo: figure out why strsafe.h includes stdio etc
+// which prevents me from directly including PAL's strsafe.h
+#ifdef __cplusplus
+#define _STRSAFE_EXTERN_C    extern "C"
+#else
+#define _STRSAFE_EXTERN_C    extern
+#endif
+
+// If you do not want to use these functions inline (and instead want to link w/ strsafe.lib), then
+// #define STRSAFE_LIB before including this header file.
+#if defined(STRSAFE_LIB)
+#define STRSAFEAPI  _STRSAFE_EXTERN_C HRESULT __stdcall
+#pragma comment(lib, "strsafe.lib")
+#elif defined(STRSAFE_LIB_IMPL)
+#define STRSAFEAPI  _STRSAFE_EXTERN_C HRESULT __stdcall
+#else
+#define STRSAFEAPI  __inline HRESULT __stdcall
+#define STRSAFE_INLINE
+#endif
+
+STRSAFEAPI StringCchPrintfW(WCHAR* pszDest, size_t cchDest, const WCHAR* pszFormat, ...);

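The `CH_WSTR` macro introduced in CommonPal.h above works by token-pasting a literal prefix, so a single source literal yields `wchar_t` text on Windows and `char16_t` text elsewhere. A self-contained sketch of the idea (`kSample` and `wchar16_len` are illustrative names, not part of the change):

```cpp
#include <cassert>
#include <cstddef>

#ifdef _WIN32
typedef wchar_t wchar16;
#define CH_WSTR(s) L##s   // L"..." / L'.' literals
#else
typedef char16_t wchar16;
#define CH_WSTR(s) u##s   // u"..." / u'.' literals (2-byte code units)
#endif

const wchar16* kSample = CH_WSTR("output.log");

// Count code units up to the terminator; note the terminator itself
// is also spelled through CH_WSTR so it has the right type.
size_t wchar16_len(const wchar16* s)
{
    size_t n = 0;
    while (s[n] != CH_WSTR('\0')) { ++n; }
    return n;
}
```

This is why the string-literal conversions elsewhere in this change (ConfigFlagsList.h, BufferBuilder.cpp) mechanically wrap every `L"..."` in `CH_WSTR("...")`.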
+ 12 - 7
lib/Common/ConfigFlagsList.h

@@ -782,8 +782,10 @@ FLAGR (String,  DumpOnLeak            , "Create a dump on failed memory leak che
 #endif
 FLAGNR(Boolean, CloneInlinedPolymorphicCaches, "Clones polymorphic inline caches in inlined functions", DEFAULT_CONFIG_CloneInlinedPolymorphicCaches)
 FLAGNR(Boolean, ConcurrentRuntime     , "Enable Concurrent GC and background JIT when creating runtime", DEFAULT_CONFIG_ConcurrentRuntime)
+#if CONFIG_CONSOLE_AVAILABLE
 FLAGNR(Boolean, Console               , "Create console window in GUI app", false)
 FLAGNR(Boolean, ConsoleExitPause      , "Pause on exit when a console window is created in GUI app", false)
+#endif
 FLAGNR(Number,  ConstructorInlineThreshold      , "Maximum size in bytecodes of a constructor inline candidate with monomorphic field access", DEFAULT_CONFIG_ConstructorInlineThreshold)
 FLAGNR(Number,  ConstructorCallsRequiredToFinalizeCachedType, "Number of calls to a constructor required before the type cached in the constructor cache is finalized", DEFAULT_CONFIG_ConstructorCallsRequiredToFinalizeCachedType)
 #ifdef SECURITY_TESTING
@@ -825,7 +827,7 @@ FLAGNR(Boolean, NoDynamicProfileInMemoryCache, "Enable in-memory cache for dynam
 FLAGNR(Boolean, ProfileBasedSpeculativeJit, "Enable dynamic profile based speculative JIT", DEFAULT_CONFIG_ProfileBasedSpeculativeJit)
 FLAGNR(Number,  ProfileBasedSpeculationCap, "In the presence of dynamic profile speculative JIT is capped to this many bytecode instructions", DEFAULT_CONFIG_ProfileBasedSpeculationCap)
 #ifdef DYNAMIC_PROFILE_MUTATOR
-FLAGNR(String,  DynamicProfileMutatorDll , "Path of the mutator DLL", L"DynamicProfileMutatorImpl.dll")
+FLAGNR(String, DynamicProfileMutatorDll , "Path of the mutator DLL", CH_WSTR("DynamicProfileMutatorImpl.dll"))
 FLAGNR(String,  DynamicProfileMutator , "Type of local, temp, return, param, loop implicit flag and implicit flag. \n\t\t\t\t\ti.e local=LikelyArray_NoMissingValues_NonInts_NonFloats;temp=Int8Array;param=LikelyNumber;return=LikelyString;loopimplicitflag=ImplicitCall_ToPrimitive;implicitflag=ImplicitCall_None\n\t\t\t\t\tor pass DynamicProfileMutator:random\n\t\t\t\t\tSee DynamicProfileInfo.h for enum values", nullptr)
 #endif
 FLAGNR(Boolean, ExecuteByteCodeBufferReturnsInvalidByteCode, "Serialized byte code execution always returns SCRIPT_E_INVALID_BYTECODE", false)
@@ -963,9 +965,9 @@ FLAGNR(Number,  FaultInjectionCount   , "Injects an out of memory at the specifi
 FLAGNR(String,  FaultInjectionType    , "FaultType (flag values) -  1 (Throw), 2 (NoThrow), 4 (MarkThrow), 8 (MarkNoThrow), FFFFFFFF (All)", nullptr)
 FLAGNR(String,  FaultInjectionFilter  , "A string to restrict the fault injection, the string can be like ArenaAllocator name", nullptr)
 FLAGNR(Number,  FaultInjectionAllocSize, "Do fault injection only this size", -1)
-FLAGNR(String,  FaultInjectionStackFile   , "Stacks to match, default: stack.txt in current directory", L"stack.txt")
+FLAGNR(String, FaultInjectionStackFile   , "Stacks to match, default: stack.txt in current directory", CH_WSTR("stack.txt"))
 FLAGNR(Number,  FaultInjectionStackLineCount   , "Count of lines in the stack file used for matching", -1)
-FLAGNR(String,  FaultInjectionStackHash, "Match stacks hash on Chakra frames to inject the fault, hex string", L"0")
+FLAGNR(String, FaultInjectionStackHash, "Match stacks hash on Chakra frames to inject the fault, hex string", CH_WSTR("0"))
 FLAGNR(Number,  FaultInjectionScriptContextToTerminateCount, "Script context# COUNT % (Number of script contexts) to terminate", 1)
 #endif
 FLAGNR(Number, InduceCodeGenFailure, "Probability of a codegen job failing.", DEFAULT_CONFIG_InduceCodeGenFailure)
@@ -1043,7 +1045,8 @@ FLAGR (Number,  AutoProfilingInterpreter1Limit, "Limit after which to transition
 FLAGR (Number,  SimpleJitLimit, "Limit after which to transition to the next execution mode", DEFAULT_CONFIG_SimpleJitLimit)
 FLAGR (Number,  ProfilingInterpreter1Limit, "Limit after which to transition to the next execution mode", DEFAULT_CONFIG_ProfilingInterpreter1Limit)
 
-FLAGNRA(String, ExecutionModeLimits,        Eml,  "Execution mode limits in the form: AutoProfilingInterpreter0.ProfilingInterpreter0.AutoProfilingInterpreter1.SimpleJit.ProfilingInterpreter1 - Example: -ExecutionModeLimits:12.4.0.132.12", L"")
+FLAGNRA(String, ExecutionModeLimits,        Eml,  "Execution mode limits in the form: AutoProfilingInterpreter0.ProfilingInterpreter0.AutoProfilingInterpreter1.SimpleJit.ProfilingInterpreter1 - Example: -ExecutionModeLimits:12.4.0.132.12", CH_WSTR(""))
+
 FLAGRA(Boolean, EnforceExecutionModeLimits, Eeml, "Enforces the execution mode limits such that they are never exceeded.", false)
 
 FLAGNRA(Number, SimpleJitAfter        , Sja, "Number of calls to a function after which to simple-JIT the function", 0)
@@ -1079,12 +1082,14 @@ FLAGNR(Boolean, NoWinRTFastSig        , "Disable fast call for common WinRT func
 FLAGNR(Phases,  Off                   , "Turn off specific phases or feature.(Might not work for all phases)", )
 FLAGNR(Phases,  OffProfiledByteCode   , "Turn off specific byte code for phases or feature.(Might not work for all phases)", )
 FLAGNR(Phases,  On                    , "Turn on specific phases or feature.(Might not work for all phases)", )
-FLAGNR(String,  OutputFile            , "Log the output to a specified file. Default: output.log in the working directory.", L"output.log")
-FLAGNR(String,  OutputFileOpenMode    , "File open mode for OutputFile. Default: wt, specify 'at' for append", L"wt")
+FLAGNR(String, OutputFile            , "Log the output to a specified file. Default: output.log in the working directory.", CH_WSTR("output.log"))
+FLAGNR(String, OutputFileOpenMode    , "File open mode for OutputFile. Default: wt, specify 'at' for append", CH_WSTR("wt"))
 #ifdef ENABLE_TRACE
 FLAGNR(Boolean, InMemoryTrace         , "Enable in-memory trace (investigate crash using trace in dump file). Use !jd.dumptrace to print it.", DEFAULT_CONFIG_InMemoryTrace)
 FLAGNR(Number,  InMemoryTraceBufferSize, "The size of circular buffer for in-memory trace (the units used is: number of trace calls). ", DEFAULT_CONFIG_InMemoryTraceBufferSize)
+#if CONFIG_RICH_TRACE_FORMAT
 FLAGNR(Boolean, RichTraceFormat, "Whether to use extra data in Output/Trace header.", DEFAULT_CONFIG_RichTraceFormat)
+#endif
 #ifdef STACK_BACK_TRACE
 FLAGNR(Boolean, TraceWithStack, "Whether the trace needs to include a stack trace (for each trace entry).", DEFAULT_CONFIG_TraceWithStack)
 #endif // STACK_BACK_TRACE
@@ -1264,7 +1269,7 @@ FLAGNR(Boolean, ChangeTypeOnProto, "When becoming a prototype should the object
 FLAGNR(Boolean, ShareInlineCaches, "Determines whether inline caches are shared between all loads (or all stores) of the same property ID", DEFAULT_CONFIG_ShareInlineCaches)
 FLAGNR(Boolean, DisableDebugObject, "Disable test only Debug object properties", DEFAULT_CONFIG_DisableDebugObject)
 FLAGNR(Boolean, DumpHeap, "enable Debug.dumpHeap even when DisableDebugObject is set", DEFAULT_CONFIG_DumpHeap)
-FLAGNR(String, autoProxy, "enable creating proxy for each object creation", L"__msTestHandler")
+FLAGNR(String, autoProxy, "enable creating proxy for each object creation", CH_WSTR("__msTestHandler"))
 FLAGNR(Number,  PerfHintLevel, "Specifies the perf-hint level (1,2) 1 == critical, 2 == only noisy", DEFAULT_CONFIG_PerfHintLevel)
 #ifdef INTERNAL_MEM_PROTECT_HEAP_ALLOC
 FLAGNR(Boolean, MemProtectHeap, "Use the mem protect heap as the default heap", DEFAULT_CONFIG_MemProtectHeap)

+ 1 - 1
lib/Common/Core/Api.h

@@ -41,7 +41,7 @@ namespace JsUtil
         // By default, implemented in Dll\Jscript\ScriptEngine.cpp
         // Anyone who statically links with jscript.common.common.lib has to implement this
         // This is used to determine which regkey we should read while loading the configuration
-        static LPWSTR GetFeatureKeyName();
+        static LPCWSTR GetFeatureKeyName();
     };
 };
 

+ 8 - 2
lib/Common/Core/Assertions.h

@@ -12,7 +12,13 @@
 // AutoDebug functions that are only available in DEBUG builds
 _declspec(selectany) int AssertCount = 0;
 _declspec(selectany) int AssertsToConsole = false;
+
+#if _WIN32
 _declspec(thread, selectany) int IsInAssert = false;
+#else
+// xplat-todo: This is wrong but unblocking Linux for now
+_declspec(selectany) int IsInAssert = false;
+#endif
 
 #if !defined(USED_IN_STATIC_LIB)
 #define REPORT_ASSERT(f, comment) Js::Throw::ReportAssert(__FILE__, __LINE__, STRINGIZE((f)), comment)
@@ -37,9 +43,9 @@ _declspec(thread, selectany) int IsInAssert = false;
             AssertCount++; \
             LOG_ASSERT(); \
             IsInAssert = TRUE; \
-            if (!REPORT_ASSERT(f, comment)) \
+            if (!REPORT_ASSERT(f, comment))      \
             { \
-                RAISE_ASSERTION(comment); \
+                RAISE_ASSERTION(comment);        \
             } \
             IsInAssert = FALSE; \
             __analysis_assume(false); \

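The xplat-todo in the Assertions.h hunk above notes that `IsInAssert` silently loses its per-thread storage on non-Win32. C++11 `thread_local` is the portable counterpart of `_declspec(thread)` and is the obvious eventual fix; a minimal sketch under invented names:

```cpp
#include <cassert>
#include <thread>

// Each thread gets its own zero-initialized copy of this variable.
thread_local int tlsIsInAssert = 0;

int ObserveFromFreshThread()
{
    tlsIsInAssert = 1;   // set only in the calling thread's copy
    int observed = -1;
    std::thread t([&] { observed = tlsIsInAssert; });  // fresh thread: 0
    t.join();
    return observed;
}
```

With the plain `_declspec(selectany)` fallback in the diff, every thread would instead share one copy, which is why the comment flags it as wrong.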
+ 1 - 1
lib/Common/Core/CMakeLists.txt

@@ -1,4 +1,4 @@
-add_library (Chakra.Common.Core
+add_library (Chakra.Common.Core STATIC
     BinaryFeatureControl.cpp
     CmdParser.cpp
     CodexAssert.cpp

+ 2 - 1
lib/Common/Core/CommonCorePch.h

@@ -7,12 +7,13 @@
 #include "CommonDefines.h"
 #include "CommonMin.h"
 
+#ifdef _MSC_VER
 #pragma warning(push)
 #if defined(PROFILE_RECYCLER_ALLOC) || defined(HEAP_TRACK_ALLOC) || defined(ENABLE_DEBUG_CONFIG_OPTIONS)
 #include <typeinfo.h>
 #endif
 #pragma warning(pop)
-
+#endif
 
 
 

+ 2 - 0
lib/Common/Core/CommonMinMax.h

@@ -4,6 +4,7 @@
 //-------------------------------------------------------------------------------------------------------
 #pragma once
 
+#ifndef USING_PAL_MINMAX
 template<class T> inline
 _Post_equal_to_(a < b ? a : b) _Post_satisfies_(return <= a && return <= b)
     const T& min(const T& a, const T& b) { return a < b ? a : b; }
@@ -11,3 +12,4 @@ _Post_equal_to_(a < b ? a : b) _Post_satisfies_(return <= a && return <= b)
 template<class T> inline
 _Post_equal_to_(a > b ? a : b) _Post_satisfies_(return >= a && return >= b)
     const T& max(const T& a, const T& b) { return a > b ? a : b; }
+#endif

+ 15 - 8
lib/Common/Core/ConfigFlagsTable.cpp

@@ -65,7 +65,7 @@ namespace Js
         this->pszValue = NULL;
     }
 
-    String::String(__in_opt LPWSTR psz)
+    String::String(__in_opt const wchar16* psz)
     {
         this->pszValue = NULL;
         Set(psz);
@@ -89,7 +89,7 @@ namespace Js
     ///----------------------------------------------------------------------------
 
     void
-    String::Set(__in_opt LPWSTR pszValue)
+    String::Set(__in_opt const wchar16* pszValue)
     {
         if(NULL != this->pszValue)
         {
@@ -283,7 +283,7 @@ namespace Js
         if ((int)parentName##Flag < FlagCount) this->flagIsParent[(int) parentName##Flag] = true;
 #include "ConfigFlagsList.h"
 #undef FLAG
-
+        
         // set all parent flags to their default (setting all child flags to their right values)
         this->SetAllParentFlagsAsDefaultValue();
     }
@@ -449,6 +449,7 @@ namespace Js
         VerifyExecutionModeLimits();
 
     #if ENABLE_DEBUG_CONFIG_OPTIONS
+    #if !DISABLE_JIT
         if(ForceDynamicProfile)
         {
             Force.Enable(DynamicProfilePhase);
@@ -457,11 +458,14 @@ namespace Js
         {
             Force.Enable(JITLoopBodyPhase);
         }
+    #endif
         if(NoDeferParse)
         {
             Off.Enable(DeferParsePhase);
         }
-
+    #endif
+        
+    #if ENABLE_DEBUG_CONFIG_OPTIONS && !DISABLE_JIT
         bool dontEnforceLimitsForSimpleJitAfterOrFullJitAfter = false;
         if((IsEnabled(MinInterpretCountFlag) || IsEnabled(MaxInterpretCountFlag)) &&
             !(IsEnabled(SimpleJitAfterFlag) || IsEnabled(FullJitAfterFlag)))
@@ -491,7 +495,7 @@ namespace Js
                     SimpleJitAfter = MinInterpretCount;
                     dontEnforceLimitsForSimpleJitAfterOrFullJitAfter = true;
                 }
-                if(IsEnabled(MinInterpretCountFlag) && IsEnabled(MinSimpleJitRunCountFlag) ||
+                if((IsEnabled(MinInterpretCountFlag) && IsEnabled(MinSimpleJitRunCountFlag)) ||
                     IsEnabled(MaxSimpleJitRunCountFlag))
                 {
                     Enable(FullJitAfterFlag);
@@ -852,7 +856,7 @@ namespace Js
     #define FLAG(type, name, ...) \
             case name##Flag : \
                 return Flag##type; \
-
+                
     #include "ConfigFlagsList.h"
 
             default:
@@ -880,7 +884,7 @@ namespace Js
             \
             case name##Flag : \
                 return reinterpret_cast<void*>(const_cast<type*>(&##name)); \
-
+            
         #include "ConfigFlagsList.h"
 
             default:
@@ -913,9 +917,12 @@ namespace Js
             case FlagNumber: \
                 Output::Print(CH_WSTR(":%d"), *GetAsNumber(name##Flag)); \
                 break; \
+            default: \
+                break; \
             }; \
             Output::Print(CH_WSTR("\n")); \
         }
+        
 #include "ConfigFlagsList.h"
 #undef FLAG
     }
@@ -1053,7 +1060,7 @@ namespace Js
 #undef FLAGDOCALLBACKPhases
 #undef FLAGCALLBACKTRUE
 #undef FLAGCALLBACKFALSE
-#undef FLAG
+#undef FLAG        
 #endif
     }
 
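The repeated `#define FLAG(...)` / `#include "ConfigFlagsList.h"` / `#undef FLAG` blocks in ConfigFlagsTable.cpp are the classic X-macro pattern: one list header stamped out with a different per-entry expansion each time. A self-contained sketch, with an invented three-entry list standing in for the real header:

```cpp
#include <cassert>
#include <cstring>

// Stand-in for ConfigFlagsList.h: every entry is spelled through FLAG,
// which each expansion site defines for its own purpose.
#define CONFIG_FLAG_LIST \
    FLAG(Boolean, Console) \
    FLAG(Number,  SimpleJitLimit) \
    FLAG(String,  OutputFile)

// Expansion 1: the Flag enum, as in ConfigFlagsTable.h.
enum Flag : unsigned short
{
#define FLAG(type, name) name##Flag,
    CONFIG_FLAG_LIST
#undef FLAG
    FlagCount
};

// Expansion 2: a parallel table of flag names from the same list, so
// the enum and the table can never drift out of sync.
static const char* const FlagNames[] =
{
#define FLAG(type, name) #name,
    CONFIG_FLAG_LIST
#undef FLAG
};
```

This is also why the change can add `FLAG_STRING` centrally: every expansion site picks the new entry up automatically on the next rebuild.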

+ 11 - 10
lib/Common/Core/ConfigFlagsTable.h

@@ -52,9 +52,10 @@ namespace Js
     ///----------------------------------------------------------------------------
 
 
-    enum Flag
+    enum Flag: unsigned short
     {
 #define FLAG(type, name, ...) name##Flag,
+
 #include "ConfigFlagsList.h"
         FlagCount,
         InvalidFlag,
@@ -75,7 +76,7 @@ namespace Js
     ///----------------------------------------------------------------------------
 
 
-    enum Phase
+    enum Phase: unsigned short
     {
 #define PHASE(name) name##Phase,
 #include "ConfigFlagsList.h"
@@ -117,12 +118,12 @@ namespace Js
 
     // Data
     private:
-                LPWSTR           pszValue;
+        wchar16*           pszValue;
 
     // Construction
     public:
         inline String();
-        inline String(__in_opt LPWSTR psz);
+        inline String(__in_opt const wchar16* psz);
         inline ~String();
 
 
@@ -135,7 +136,7 @@ namespace Js
         ///
         ///----------------------------------------------------------------------------
 
-        String& operator=(__in_opt LPWSTR psz)
+        String& operator=(__in_opt const wchar16* psz)
         {
             Set(psz);
             return *this;
@@ -152,14 +153,14 @@ namespace Js
         ///
         ///----------------------------------------------------------------------------
 
-        operator LPCWSTR () const
+        operator const wchar16* () const
         {
             return this->pszValue;
         }
 
     // Implementation
     private:
-        void Set(__in_opt LPWSTR pszValue);
+        void Set(__in_opt const wchar16* pszValue);
     };
 
     class NumberSet
@@ -435,7 +436,7 @@ namespace Js
 
         #define FLAG(type, name, ...) \
             \
-            type name;\
+            type name;                      \
 
         #include "ConfigFlagsList.h"
 
@@ -458,8 +459,8 @@ namespace Js
 
 #ifdef ENABLE_DEBUG_CONFIG_OPTIONS
                 // special callback logic
-                void        FlagSetCallback_ES6All(Boolean value);
-                void        FlagSetCallback_ES6Experimental(Boolean value);
+                void        FlagSetCallback_ES6All(Js::Boolean value);
+                void        FlagSetCallback_ES6Experimental(Js::Boolean value);
 #endif
 
     public:

+ 15 - 6
lib/Common/Core/ConfigParser.cpp

@@ -3,12 +3,14 @@
 // Licensed under the MIT license. See LICENSE.txt file in the project root for full license information.
 //-------------------------------------------------------------------------------------------------------
 #include "CommonCorePch.h"
-#ifdef _WIN32
+
+#ifndef USING_PAL_STDLIB
 #include <io.h>
 #include <share.h>
-#endif
 #include <fcntl.h>
 #include <strsafe.h>
+#endif
+
 #include "Memory/MemoryLogger.h"
 #include "Memory/ForcedMemoryConstraints.h"
 #include "Core/ICustomConfigFlags.h"
@@ -25,7 +27,7 @@ class ArenaHost
     ArenaAllocator m_allocator;
 
 public:
-    ArenaHost(__in_z wchar16* arenaName) :
+    ArenaHost(__in_z const wchar16* arenaName) :
         m_allocationPolicyManager(/* needConcurrencySupport = */ true),
         m_pageAllocator(&m_allocationPolicyManager, Js::Configuration::Global.flags),
         m_allocator(arenaName, &m_pageAllocator, Js::Throw::OutOfMemory)
@@ -308,7 +310,7 @@ void ConfigParser::ParseRegistryKey(HKEY hk, CmdLineArgsParser &parser)
 
 void ConfigParser::ParseConfig(HANDLE hmod, CmdLineArgsParser &parser)
 {
-#if defined(ENABLE_DEBUG_CONFIG_OPTIONS) || defined(PARSE_CONFIG_FILE)
+#if defined(ENABLE_DEBUG_CONFIG_OPTIONS) && CONFIG_PARSE_CONFIG_FILE
     Assert(!_hasReadConfig);
     _hasReadConfig = true;
 
@@ -360,12 +362,18 @@ void ConfigParser::ParseConfig(HANDLE hmod, CmdLineArgsParser &parser)
 
 void ConfigParser::ProcessConfiguration(HANDLE hmod)
 {
-#ifdef ENABLE_DEBUG_CONFIG_OPTIONS
+#if defined(ENABLE_DEBUG_CONFIG_OPTIONS)
     bool hasOutput = false;
     wchar16 modulename[_MAX_PATH];
 
     GetModuleFileName((HMODULE)hmod, modulename, _MAX_PATH);
 
+    // Win32 specific console creation code
+    // xplat-todo: Consider having this mechanism available on other
+    // platforms
+    // Not a pressing need since ChakraCore runs only in consoles by
+    // default so we don't need to allocate a second console for this
+#if CONFIG_CONSOLE_AVAILABLE
     if (Js::Configuration::Global.flags.Console)
     {
         int fd;
@@ -395,7 +403,8 @@ void ConfigParser::ProcessConfiguration(HANDLE hmod)
 
         hasOutput = true;
     }
-
+#endif
+    
     if (Js::Configuration::Global.flags.IsEnabled(Js::OutputFileFlag)
         && Js::Configuration::Global.flags.OutputFile != nullptr)
     {

+ 13 - 5
lib/Common/Core/Output.cpp

@@ -4,8 +4,10 @@
 //-------------------------------------------------------------------------------------------------------
 #include "CommonCorePch.h"
 
+#ifndef USING_PAL_STDLIB
 #include <string.h>
 #include <stdarg.h>
+#endif
 
 // Initialization order
 //  AB AutoSystemInfo
@@ -27,7 +29,9 @@ CriticalSection     Output::s_critsect;
 AutoFILE            Output::s_outputFile; // Create a separate output file that is not thread-local.
 #ifdef ENABLE_TRACE
 Js::ILogger*        Output::s_inMemoryLogger = nullptr;
+#ifdef STACK_BACK_TRACE
 Js::IStackTraceHelper* Output::s_stackTraceHelper = nullptr;
+#endif
 unsigned int Output::s_traceEntryId = 0;
 #endif
 
@@ -100,7 +104,7 @@ Output::TraceWithPrefix(Js::Phase phase, const wchar16 prefix[], const wchar16 *
         va_list argptr;
         va_start(argptr, form);
         WCHAR prefixValue[512];
-        swprintf_s(prefixValue, CH_WSTR("%s: %s: "), Js::PhaseNames[static_cast<int>(phase)], prefix);
+        _snwprintf_s(prefixValue, _countof(prefixValue), _TRUNCATE, CH_WSTR("%s: %s: "), Js::PhaseNames[static_cast<int>(phase)], prefix);
         retValue += Output::VTrace(CH_WSTR("%s"), prefixValue, form, argptr);
     }
 
@@ -144,17 +148,20 @@ Output::VTrace(const wchar16* shortPrefixFormat, const wchar16* prefix, const wc
 {
     size_t retValue = 0;
 
+#if CONFIG_RICH_TRACE_FORMAT
     if (CONFIG_FLAG(RichTraceFormat))
     {
         InterlockedIncrement(&s_traceEntryId);
         retValue += Output::Print(CH_WSTR("[%d ~%d %s] "), s_traceEntryId, ::GetCurrentThreadId(), prefix);
     }
     else
+#endif
     {
         retValue += Output::Print(shortPrefixFormat, prefix);
     }
     retValue += Output::VPrint(form, argptr);
 
+#ifdef STACK_BACK_TRACE
     // Print stack trace.
     if (s_stackTraceHelper)
     {
@@ -194,7 +201,8 @@ Output::VTrace(const wchar16* shortPrefixFormat, const wchar16* prefix, const wc
             retValue += s_stackTraceHelper->PrintStackTrace(c_framesToSkip, c_frameCount);
         }
     }
-
+#endif
+    
     return retValue;
 }
 
@@ -473,15 +481,15 @@ Output::SetInMemoryLogger(Js::ILogger* logger)
     s_inMemoryLogger = logger;
 }
 
+#ifdef STACK_BACK_TRACE
 void
 Output::SetStackTraceHelper(Js::IStackTraceHelper* helper)
 {
     AssertMsg(s_stackTraceHelper == nullptr, "This cannot be called more than once.");
-#ifndef STACK_BACK_TRACE
-    AssertMsg("STACK_BACK_TRACE must be defined");
-#endif
     s_stackTraceHelper = helper;
 }
+#endif
+
 #endif // ENABLE_TRACE
 
 //

+ 10 - 4
lib/Common/Core/Output.h

@@ -5,11 +5,11 @@
 #pragma once
 
 // xplat-todo: error: ISO C++ forbids forward references to 'enum' types
-#ifdef ENABLE_TRACE
+#if defined(ENABLE_TRACE) 
 namespace Js
 {
-enum Flag;
-enum Phase;
+enum Flag: unsigned short;
+enum Phase: unsigned short;
 };
 #endif
 
@@ -54,11 +54,14 @@ namespace Js
     {
         virtual void Write(const wchar16* msg) = 0;
     };
+
+#ifdef STACK_BACK_TRACE
     struct IStackTraceHelper
     {
         virtual size_t PrintStackTrace(ULONG framesToSkip, ULONG framesToCapture) = 0;  // Returns # of chars printed.
         virtual ULONG GetStackTrace(ULONG framesToSkip, ULONG framesToCapture, void** stackFrames) = 0; // Returns # of frames captured.
     };
+#endif
 } // namespace Js.
 
 
@@ -89,9 +92,12 @@ public:
         }
 
         return retValue;
-    }
+    }    
     static void     SetInMemoryLogger(Js::ILogger* logger);
+#ifdef STACK_BACK_TRACE
     static void     SetStackTraceHelper(Js::IStackTraceHelper* helper);
+#endif
+    
 #endif // ENABLE_TRACE
     static size_t __cdecl Print(const wchar16 *form, ...);
     static size_t __cdecl Print(int column, const wchar16 *form, ...);

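The Output.h hunk above resolves the "ISO C++ forbids forward references to 'enum' types" error by giving `Js::Flag` and `Js::Phase` a fixed underlying type: C++11 only permits opaque enum declarations when the underlying type is spelled out, because only then is the type's size known. A minimal illustration (the `Phase` values here are invented):

```cpp
#include <cassert>

namespace Js
{
    // Opaque declaration: with the underlying type fixed, Phase is a
    // complete type and can be used in declarations right away.
    enum Phase : unsigned short;

    unsigned short AsIndex(Phase p)
    {
        return static_cast<unsigned short>(p);
    }

    // The full definition can appear later (in the real code, it is
    // generated from ConfigFlagsList.h in another header).
    enum Phase : unsigned short
    {
        DeferParsePhase,
        JITLoopBodyPhase
    };
}
```

The matching `enum Flag : unsigned short` / `enum Phase : unsigned short` spellings in ConfigFlagsTable.h keep the forward declarations and the definitions consistent, which the standard requires.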
+ 2 - 2
lib/Common/Core/ProfileInstrument.h

@@ -164,8 +164,8 @@ namespace Js
 #define ASYNC_HOST_OPERATION_START(threadContext) {Js::Profiler::SuspendRecord __suspendRecord;  bool wasInAsync = threadContext->AsyncHostOperationStart(&__suspendRecord)
 #define ASYNC_HOST_OPERATION_END(threadContext) threadContext->AsyncHostOperationEnd(wasInAsync, &__suspendRecord); }
 #elif DBG
-#define ASYNC_HOST_OPERATION_START(threadContext) { bool wasInAsync = threadContext->AsyncHostOperationStart(null)
-#define ASYNC_HOST_OPERATION_END(threadContext) threadContext->AsyncHostOperationEnd(wasInAsync, null)
+#define ASYNC_HOST_OPERATION_START(threadContext) { bool wasInAsync = threadContext->AsyncHostOperationStart(nullptr)
+#define ASYNC_HOST_OPERATION_END(threadContext) threadContext->AsyncHostOperationEnd(wasInAsync, nullptr); }
 #else
 #define ASYNC_HOST_OPERATION_START(threadContext)
 #define ASYNC_HOST_OPERATION_END(threadContext)

+ 6 - 1
lib/Common/Core/ProfileMemory.cpp

@@ -15,7 +15,12 @@ CriticalSection MemoryProfiler::s_cs;
 AutoPtr<MemoryProfiler, NoCheckHeapAllocator> MemoryProfiler::profilers(nullptr);
 
 MemoryProfiler::MemoryProfiler() :
-    pageAllocator(nullptr, Js::Configuration::Global.flags, PageAllocatorType_Max, 0, false, nullptr),
+    pageAllocator(nullptr, Js::Configuration::Global.flags,
+    PageAllocatorType_Max, 0, false
+#if ENABLE_BACKGROUND_PAGE_FREEING
+        , nullptr
+#endif
+        ),
     alloc(CH_WSTR("MemoryProfiler"), &pageAllocator, Js::Throw::OutOfMemory),
     arenaDataMap(&alloc, 10)
 {

+ 42 - 6
lib/Common/Core/SysInfo.cpp

@@ -24,14 +24,46 @@
 #pragma warning(disable:4075)       // initializers put in unrecognized initialization area on purpose
 #pragma init_seg(".CRT$XCAB")
 
+#if SYSINFO_IMAGE_BASE_AVAILABLE
 EXTERN_C IMAGE_DOS_HEADER __ImageBase;
+#endif
+
+AutoSystemInfo AutoSystemInfo::Data INIT_PRIORITY(300);
+
+#if DBG
+bool
+AutoSystemInfo::IsInitialized()
+{
+    return AutoSystemInfo::Data.initialized;
+}
+#endif
 
-AutoSystemInfo AutoSystemInfo::Data;
+bool
+AutoSystemInfo::ShouldQCMoreFrequently()
+{
+    return Data.shouldQCMoreFrequently;
+}
+
+bool
+AutoSystemInfo::SupportsOnlyMultiThreadedCOM()
+{
+    return Data.supportsOnlyMultiThreadedCOM;
+}
+
+bool
+AutoSystemInfo::IsLowMemoryDevice()
+{
+    return Data.isLowMemoryDevice;
+}
 
 void
 AutoSystemInfo::Initialize()
 {
     Assert(!initialized);
+#ifndef _WIN32
+    PAL_InitializeChakraCore("/home/hiteshk/code/core/BuildLinux/bin/GCStress/GCStress");
+#endif
+
     processHandle = GetCurrentProcess();
     GetSystemInfo(this);
 
@@ -50,14 +82,17 @@ AutoSystemInfo::Initialize()
 
     binaryName[0] = L'\0';
 
+#if SYSINFO_IMAGE_BASE_AVAILABLE
     dllLoadAddress = (UINT_PTR)&__ImageBase;
     dllHighAddress = (UINT_PTR)&__ImageBase +
         ((PIMAGE_NT_HEADERS)(((char *)&__ImageBase) + __ImageBase.e_lfanew))->OptionalHeader.SizeOfImage;
-
+#endif
+    
     InitPhysicalProcessorCount();
 #if DBG
     initialized = true;
 #endif
+
     WCHAR DisableDebugScopeCaptureFlag[MAX_PATH];
     if (::GetEnvironmentVariable(CH_WSTR("JS_DEBUG_SCOPE"), DisableDebugScopeCaptureFlag, _countof(DisableDebugScopeCaptureFlag)) != 0)
     {
@@ -67,7 +102,7 @@ AutoSystemInfo::Initialize()
     {
         disableDebugScopeCapture = false;
     }
-
+    
     this->shouldQCMoreFrequently = false;
     this->supportsOnlyMultiThreadedCOM = false;
     this->isLowMemoryDevice = false;
@@ -141,25 +176,26 @@ AutoSystemInfo::InitPhysicalProcessorCount()
     return true;
 }
 
+#if SYSINFO_IMAGE_BASE_AVAILABLE
 bool
 AutoSystemInfo::IsJscriptModulePointer(void * ptr)
 {
     return ((UINT_PTR)ptr >= Data.dllLoadAddress && (UINT_PTR)ptr < Data.dllHighAddress);
 }
-
+#endif
 
 uint
 AutoSystemInfo::GetAllocationGranularityPageCount() const
 {
     Assert(initialized);
-    return allocationGranularityPageCount;
+    return this->allocationGranularityPageCount;
 }
 
 uint
 AutoSystemInfo::GetAllocationGranularityPageSize() const
 {
     Assert(initialized);
-    return allocationGranularityPageCount * PageSize;
+    return this->allocationGranularityPageCount * PageSize;
 }
 
 #if defined(_M_IX86) || defined(_M_X64)

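On non-Windows, `INIT_PRIORITY(x)` expands to `__attribute__((init_priority(x)))`, which is how the SysInfo.cpp change orders construction of `AutoSystemInfo::Data` relative to other file-scope statics (replacing the MSVC-only `#pragma init_seg`). A GCC/Clang-only sketch with invented types; lower numbers construct first, and values 1-100 are reserved for the implementation:

```cpp
#include <cassert>

static int g_constructionCounter = 0;

// Records the order in which file-scope instances were constructed.
struct Recorder
{
    int constructedAt;
    Recorder() : constructedAt(++g_constructionCounter) {}
};

// Despite declaration order being irrelevant here, priority 300 is
// guaranteed to run its constructor before priority 400.
Recorder earlyObject __attribute__((init_priority(300)));
Recorder lateObject  __attribute__((init_priority(400)));
```

Priority 300 for `AutoSystemInfo::Data` ensures it is initialized before ordinary statics (which have no priority and run after all prioritized ones) start reading it.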
+ 10 - 15
lib/Common/Core/SysInfo.h

@@ -45,9 +45,11 @@ public:
     static LPCWSTR GetJscriptDllFileName();
     static HRESULT GetJscriptFileVersion(DWORD* majorVersion, DWORD* minorVersion, DWORD *buildDateHash = nullptr, DWORD *buildTimeHash = nullptr);
 #if DBG
-    static bool IsInitialized() { return AutoSystemInfo::Data.initialized; }
+    static bool IsInitialized();
 #endif
+#if SYSINFO_IMAGE_BASE_AVAILABLE
     static bool IsJscriptModulePointer(void * ptr);
+#endif
     static DWORD const PageSize = 4096;
 
 #ifdef STACK_ALIGN
@@ -64,8 +66,11 @@ public:
 # endif
 #endif
 
+#if SYSINFO_IMAGE_BASE_AVAILABLE
     UINT_PTR dllLoadAddress;
     UINT_PTR dllHighAddress;
+#endif
+    
 private:
     AutoSystemInfo() : majorVersion(0), minorVersion(0), buildDateHash(0), buildTimeHash(0) { Initialize(); }
     void Initialize();
@@ -108,20 +113,9 @@ private:
     bool isLowMemoryDevice;
 
 public:
-    static bool ShouldQCMoreFrequently()
-    {
-        return Data.shouldQCMoreFrequently;
-    }
-
-    static bool SupportsOnlyMultiThreadedCOM()
-    {
-        return Data.supportsOnlyMultiThreadedCOM;
-    }
-
-    static bool IsLowMemoryDevice()
-    {
-        return Data.isLowMemoryDevice;
-    }
+    static bool ShouldQCMoreFrequently();
+    static bool SupportsOnlyMultiThreadedCOM();
+    static bool IsLowMemoryDevice();
 };
 
 
@@ -129,3 +123,4 @@ public:
 CompileAssert(AutoSystemInfo::PageSize == 4096);
 #define __in_ecount_pagesize __in_ecount(4096)
 #define __in_ecount_twopagesize __in_ecount(8192)
+

+ 3 - 3
lib/Common/DataStructures/BufferBuilder.cpp

@@ -17,12 +17,12 @@ BufferBuilder::TraceOutput(byte * buffer, uint32 size) const
 {
     if (PHASE_TRACE1(Js::ByteCodeSerializationPhase))
     {
-        Output::Print(L"%08X: %-40s:", this->offset, this->clue);
+        Output::Print(CH_WSTR("%08X: %-40s:"), this->offset, this->clue);
         for (uint i = 0; i < size; i ++)
         {
-            Output::Print(L" %02x", buffer[this->offset + i]);
+            Output::Print(CH_WSTR(" %02x"), buffer[this->offset + i]);
         }
-        Output::Print(L"\n");
+        Output::Print(CH_WSTR("\n"));
     }
 }
 #endif

+ 1 - 1
lib/Common/DataStructures/CMakeLists.txt

@@ -1,4 +1,4 @@
-add_library (Chakra.Common.DataStructures
+add_library (Chakra.Common.DataStructures STATIC
     BigInt.cpp
     BufferBuilder.cpp
     CommonDataStructuresPch.cpp

+ 4 - 2
lib/Common/DataStructures/CommonDataStructuresPch.h

@@ -7,15 +7,17 @@
 #include "CommonMinMemory.h"
 
 // === C Runtime Header Files ===
-#ifdef _WIN32
+#ifndef USING_PAL_STDLIB
 #include <wchar.h>
-#endif
 
 #if defined(_UCRT)
 #include <cmath>
 #else
 #include <math.h>
 #endif
+#else
+#include "CommonPal.h"
+#endif
 
 // === Codex Header Files ===
 #include "Codex/Utf8Codex.h"

+ 2 - 2
lib/Common/DataStructures/DList.h

@@ -128,8 +128,8 @@ public:
         template <typename TAllocator>
         void RemoveCurrent(TAllocator * allocator)
         {
-            Assert(current != nullptr);
-            Assert(!list->IsHead(current));
+            Assert(this->current != nullptr);
+            Assert(!this->list->IsHead(this->current));
 
             NodeBase * last = this->current->Prev();
             NodeBase * node = const_cast<NodeBase *>(this->current);

+ 2 - 2
lib/Common/DataStructures/FixedBitVector.cpp

@@ -349,11 +349,11 @@ void
 BVFixed::Dump() const
 {
     bool hasBits = false;
-    Output::Print(L"[  ");
+    Output::Print(CH_WSTR("[  "));
     for(BVIndex i=0; i < this->WordCount(); i++)
     {
         hasBits = this->data[i].Dump(i * BVUnit::BitsPerWord, hasBits);
     }
-    Output::Print(L"]\n");
+    Output::Print(CH_WSTR("]\n"));
 }
 #endif

+ 0 - 1
lib/Common/DataStructures/ImmutableList.cpp

@@ -3,7 +3,6 @@
 // Licensed under the MIT license. See LICENSE.txt file in the project root for full license information.
 //-------------------------------------------------------------------------------------------------------
 #include "CommonDataStructuresPch.h"
-#include <strsafe.h>
 #include "Option.h"
 #include "ImmutableList.h"
 

+ 17 - 17
lib/Common/DataStructures/List.h

@@ -214,8 +214,8 @@ namespace JsUtil
         TRemovePolicyType removePolicy;
 
         template <bool isLeaf> T * AllocArray(int size);
-        template <> T * AllocArray<true>(int size) { return AllocatorNewArrayLeaf(TAllocator, alloc, T, size); }
-        template <> T * AllocArray<false>(int size) { return AllocatorNewArray(TAllocator, alloc, T, size); }
+        template <> T * AllocArray<true>(int size) { return AllocatorNewArrayLeaf(TAllocator, this->alloc, T, size); }
+        template <> T * AllocArray<false>(int size) { return AllocatorNewArray(TAllocator, this->alloc, T, size); }
 
         PREVENT_COPY(List); // Disable copy constructor and operator=
 
@@ -237,15 +237,15 @@ namespace JsUtil
 
         void EnsureArray(int32 requiredCapacity)
         {
-            if (buffer == nullptr)
+            if (this->buffer == nullptr)
             {
                 int32 newSize = max(requiredCapacity, increment);
 
-                buffer = AllocArray<isLeaf>(newSize);
-                count = 0;
-                length = newSize;
+                this->buffer = AllocArray<isLeaf>(newSize);
+                this->count = 0;
+                this->length = newSize;
             }
-            else if (count == length || requiredCapacity > length)
+            else if (this->count == length || requiredCapacity > this->length)
             {
                 int32 newLength = 0, newBufferSize = 0, oldBufferSize = 0;
 
@@ -265,13 +265,13 @@ namespace JsUtil
 
                 T* newbuffer = AllocArray<isLeaf>(newLength);
 
-                js_memcpy_s(newbuffer, newBufferSize, buffer, oldBufferSize);
+                js_memcpy_s(newbuffer, newBufferSize, this->buffer, oldBufferSize);
 
                 auto freeFunc = AllocatorInfo::GetFreeFunc();
-                AllocatorFree(this->alloc, freeFunc, buffer, oldBufferSize);
+                AllocatorFree(this->alloc, freeFunc, this->buffer, oldBufferSize);
 
-                length = newLength;
-                buffer = newbuffer;
+                this->length = newLength;
+                this->buffer = newbuffer;
             }
         }
 
@@ -317,8 +317,8 @@ namespace JsUtil
 
         T& Item(int index)
         {
-            Assert(index >= 0 && index < count);
-            return buffer[index];
+            Assert(index >= 0 && index < this->count);
+            return this->buffer[index];
         }
 
         T& Last()
@@ -376,16 +376,16 @@ namespace JsUtil
                 return Add(item);
             }
 
-            buffer[indexToSetAt] = item;
+            this->buffer[indexToSetAt] = item;
             return indexToSetAt;
         }
 
         int Add(const T& item)
         {
             EnsureArray();
-            buffer[count] = item;
-            int pos = count;
-            count++;
+            this->buffer[this->count] = item;
+            int pos = this->count;
+            this->count++;
             return pos;
         }
 

+ 4 - 1
lib/Common/DataStructures/PageStack.h

@@ -38,7 +38,10 @@ public:
 #endif
 
 #ifdef ENABLE_DEBUG_CONFIG_OPTIONS
-    void SetMaxPageCount(size_t maxPageCount) { this->maxPageCount = max<size_t>(maxPageCount, 1); }
+    void SetMaxPageCount(size_t maxPageCount)
+    {
+        this->maxPageCount = maxPageCount > 1 ? maxPageCount : 1;
+    }
 #endif
 
     static const uint MaxSplitTargets = 3;     // Not counting original stack, so this supports 4-way parallel

+ 0 - 74
lib/Common/DataStructures/SparseBitVector.cpp

@@ -16,77 +16,3 @@ void BVSparseNode::init(BVIndex beginIndex, BVSparseNode * nextNode)
     this->data = 0;
     this->next = nextNode;
 }
-
-bool BVSparseNode::ToString(
-    __out_ecount(strSize) char *const str,
-    const size_t strSize,
-    size_t *const writtenLengthRef,
-    const bool isInSequence,
-    const bool isFirstInSequence,
-    const bool isLastInSequence) const
-{
-    Assert(str);
-    Assert(!isFirstInSequence || isInSequence);
-    Assert(!isLastInSequence || isInSequence);
-
-    if (strSize == 0)
-    {
-        if (writtenLengthRef)
-        {
-            *writtenLengthRef = 0;
-        }
-        return false;
-    }
-    str[0] = '\0';
-
-    const size_t reservedLength = _countof(", ...}");
-    if (strSize <= reservedLength)
-    {
-        if (writtenLengthRef)
-        {
-            *writtenLengthRef = 0;
-        }
-        return false;
-    }
-
-    size_t length = 0;
-    if (!isInSequence || isFirstInSequence)
-    {
-        str[length++] = '{';
-    }
-
-    bool insertComma = isInSequence && !isFirstInSequence;
-    char tempStr[13];
-    for (BVIndex i = data.GetNextBit(); i != BVInvalidIndex; i = data.GetNextBit(i + 1))
-    {
-        const size_t copyLength = sprintf_s(tempStr, insertComma ? ", %u" : "%u", startIndex + i);
-        Assert(static_cast<int>(copyLength) > 0);
-
-        Assert(strSize > length);
-        Assert(strSize - length > reservedLength);
-        if (strSize - length - reservedLength <= copyLength)
-        {
-            strcpy_s(&str[length], strSize - length, insertComma ? ", ...}" : "...}");
-            if (writtenLengthRef)
-            {
-                *writtenLengthRef = length + (insertComma ? _countof(", ...}") : _countof("...}"));
-            }
-            return false;
-        }
-
-        strcpy_s(&str[length], strSize - length - reservedLength, tempStr);
-        length += copyLength;
-        insertComma = true;
-    }
-    if (!isInSequence || isLastInSequence)
-    {
-        Assert(_countof("}") < strSize - length);
-        strcpy_s(&str[length], strSize - length, "}");
-        length += _countof("}");
-    }
-    if (writtenLengthRef)
-    {
-        *writtenLengthRef = length;
-    }
-    return true;
-}

+ 2 - 77
lib/Common/DataStructures/SparseBitVector.h

@@ -77,13 +77,6 @@ struct BVSparseNode
     BVSparseNode(BVIndex beginIndex, BVSparseNode * nextNode);
 
     void init(BVIndex beginIndex, BVSparseNode * nextNode);
-    bool ToString(
-        __out_ecount(strSize) char *const str,
-        const size_t strSize,
-        size_t *const writtenLengthRef = nullptr,
-        const bool isInSequence = false,
-        const bool isFirstInSequence = false,
-        const bool isLastInSequence = false) const;
 };
 
 // xplat-todo: revisit for unix
@@ -189,9 +182,6 @@ public:
             // this & bv != empty
             bool            Test(BVSparse const * bv) const;
 
-            void            ToString(__out_ecount(strSize) char *const str, const size_t strSize) const;
-            template<class F> void ToString(__out_ecount(strSize) char *const str, const size_t strSize, const F ReadNode) const;
-
             TAllocator *    GetAllocator() const { return alloc; }
 #if DBG_DUMP
             void            Dump() const;
@@ -894,71 +884,6 @@ BVSparse<TAllocator>::Test(BVSparse const * bv) const
     return false;
 }
 
-template<class TAllocator>
-template<class F>
-void BVSparse<TAllocator>::ToString(__out_ecount(strSize) char *const str, const size_t strSize, const F ReadNode) const
-{
-    Assert(str);
-
-    if(strSize == 0)
-    {
-        return;
-    }
-    str[0] = '\0';
-
-    bool empty = true;
-    bool isFirstInSequence = true;
-    size_t length = 0;
-    BVSparseNode *nodePtr = head;
-    while(nodePtr)
-    {
-        bool readSuccess;
-        const BVSparseNode node(ReadNode(nodePtr, &readSuccess));
-        if(!readSuccess)
-        {
-            str[0] = '\0';
-            return;
-        }
-        if(node.data.IsEmpty())
-        {
-            nodePtr = node.next;
-            continue;
-        }
-        empty = false;
-
-        size_t writtenLength;
-        if(!node.ToString(&str[length], strSize - length, &writtenLength, true, isFirstInSequence, !node.next))
-        {
-            return;
-        }
-        length += writtenLength;
-
-        isFirstInSequence = false;
-        nodePtr = node.next;
-    }
-
-    if(empty && _countof("{}") < strSize)
-    {
-        strcpy_s(str, strSize, "{}");
-    }
-}
-
-template<class TAllocator>
-void BVSparse<TAllocator>::ToString(__out_ecount(strSize) char *const str, const size_t strSize) const
-{
-    ToString(
-        str,
-        strSize,
-        [](BVSparseNode *const nodePtr, bool *const successRef) -> BVSparseNode
-        {
-            Assert(nodePtr);
-            Assert(successRef);
-
-            *successRef = true;
-            return *nodePtr;
-        });
-}
-
 #if DBG_DUMP
 
 template <class TAllocator>
@@ -966,11 +891,11 @@ void
 BVSparse<TAllocator>::Dump() const
 {
     bool hasBits = false;
-    Output::Print(L"[  ");
+    Output::Print(CH_WSTR("[  "));
     for(BVSparseNode * node = this->head; node != 0 ; node = node->next)
     {
         hasBits = node->data.Dump(node->startIndex, hasBits);
     }
-    Output::Print(L"]\n");
+    Output::Print(CH_WSTR("]\n"));
 }
 #endif

+ 5 - 5
lib/Common/DataStructures/UnitBitVector.h

@@ -461,7 +461,7 @@ public:
 #if DBG_DUMP || defined(ENABLE_IR_VIEWER)
     void DumpWord()
     {
-        Output::Print(L"%p", this->word);
+        Output::Print(CH_WSTR("%p"), this->word);
     }
 
     bool Dump(BVIndex base = 0, bool hasBits = false) const
@@ -470,9 +470,9 @@ public:
         {
             if (hasBits)
             {
-                Output::Print(L", ");
+                Output::Print(CH_WSTR(", "));
             }
-            Output::Print(L"%u", index + base);
+            Output::Print(CH_WSTR("%u"), index + base);
             hasBits = true;
         }
         NEXT_BITSET_IN_UNITBV;
@@ -484,8 +484,8 @@ public:
 typedef BVUnitT<UnitWord32> BVUnit32;
 typedef BVUnitT<UnitWord64> BVUnit64;
 
-template<> const LONG BVUnitT<UnitWord32>::ShiftValue = 5;
-template<> const LONG BVUnitT<UnitWord64>::ShiftValue = 6;
+template<> const __declspec(selectany) LONG BVUnitT<UnitWord32>::ShiftValue = 5;
+template<> const __declspec(selectany) LONG BVUnitT<UnitWord64>::ShiftValue = 6;
 
 #if defined(_M_X64_OR_ARM64)
     typedef BVUnit64 BVUnit;

+ 2 - 2
lib/Common/DataStructures/WeakReferenceDictionary.h

@@ -20,8 +20,8 @@ namespace JsUtil
     class WeakReferenceDictionary: public BaseDictionary<TKey, RecyclerWeakReference<TValue>*, RecyclerNonLeafAllocator, SizePolicy, Comparer, WeakRefValueDictionaryEntry>,
                                    public IWeakReferenceDictionary
     {
-        typedef BaseDictionary<TKey, RecyclerWeakReference<TValue>*, RecyclerNonLeafAllocator, SizePolicy, Comparer, WeakRefValueDictionaryEntry>
-                Base;
+        typedef BaseDictionary<TKey, RecyclerWeakReference<TValue>*, RecyclerNonLeafAllocator, SizePolicy, Comparer, WeakRefValueDictionaryEntry> Base;
+        
     public:
         WeakReferenceDictionary(Recycler* recycler, int capacity = 0):
           BaseDictionary(recycler, capacity)

+ 1 - 1
lib/Common/Exceptions/CMakeLists.txt

@@ -1,4 +1,4 @@
-add_library (Chakra.Common.Exceptions
+add_library (Chakra.Common.Exceptions STATIC
     # CommonExceptionsPch.cpp
     ExceptionCheck.cpp
     ExceptionBase.cpp

+ 4 - 1
lib/Common/Exceptions/ReportError.cpp

@@ -4,7 +4,10 @@
 //-------------------------------------------------------------------------------------------------------
 #include "CommonExceptionsPch.h"
 
-__inline void ReportFatalException(
+#ifdef _MSC_VER
+__inline
+#endif
+void ReportFatalException(
     __in ULONG_PTR context,
     __in HRESULT exceptionCode,
     __in ErrorReason reasonCode,

+ 10 - 4
lib/Common/Exceptions/Throw.cpp

@@ -4,11 +4,14 @@
 //-------------------------------------------------------------------------------------------------------
 
 #include "CommonExceptionsPch.h"
+
+#ifndef USING_PAL_STDLIB
 // === C Runtime Header Files ===
 #pragma warning(push)
 #pragma warning(disable: 4995) /* 'function': name was marked as #pragma deprecated */
 #include <strsafe.h>
 #pragma warning(pop)
+#endif
 
 #include "StackOverflowException.h"
 #include "AsmJsParseException.h"
@@ -85,7 +88,7 @@ namespace Js {
 #ifdef ENABLE_DEBUG_CONFIG_OPTIONS
         if (CONFIG_FLAG(PrintSystemException))
         {
-            Output::Print(L"SystemException: OutOfMemory\n");
+            Output::Print(CH_WSTR("SystemException: OutOfMemory\n"));
             Output::Flush();
         }
 #endif
@@ -107,7 +110,7 @@ namespace Js {
 #ifdef ENABLE_DEBUG_CONFIG_OPTIONS
         if (CONFIG_FLAG(PrintSystemException))
         {
-            Output::Print(L"SystemException: StackOverflow\n");
+            Output::Print(CH_WSTR("SystemException: StackOverflow\n"));
             Output::Flush();
         }
 #endif
@@ -253,7 +256,7 @@ namespace Js {
     }
 
 #ifdef ENABLE_DEBUG_CONFIG_OPTIONS
-    static const wchar_t * caption = L"CHAKRA ASSERT";
+    static const wchar_t * caption = CH_WSTR("CHAKRA ASSERT");
 #endif
 
     bool Throw::ReportAssert(__in LPSTR fileName, uint lineNumber, __in LPSTR error, __in LPSTR message)
@@ -284,7 +287,10 @@ namespace Js {
             return false;
 #endif
         }
-#ifdef ENABLE_DEBUG_CONFIG_OPTIONS
+
+        // The following code is applicable only when we are hosted in a
+        // GUI environment
+#if defined(ENABLE_DEBUG_CONFIG_OPTIONS) && defined(_WIN32)
         // Then if DumpOncrashFlag is not specified it directly returns,
         // otherwise it will raise a non-continuable exception, generate the dump and terminate the process.
         // the popup message box might be useful when testing in IE

+ 2 - 1
lib/Common/Exceptions/Throw.h

@@ -23,9 +23,10 @@ namespace Js {
         static void __declspec(noreturn) FatalProjectionError();
 
         static void CheckAndThrowOutOfMemory(BOOLEAN status);
-#ifdef GENERATE_DUMP
+
         static bool ReportAssert(__in LPSTR fileName, uint lineNumber, __in LPSTR error, __in LPSTR message);
         static void LogAssert();
+#ifdef GENERATE_DUMP
         static int GenerateDump(PEXCEPTION_POINTERS exceptInfo, LPCWSTR filePath, int ret = EXCEPTION_CONTINUE_SEARCH, bool needLock = false);
         static void GenerateDump(LPCWSTR filePath, bool terminate = false, bool needLock = false);
         static void GenerateDumpForAssert(LPCWSTR filePath);

+ 9 - 0
lib/Common/Memory/ArenaAllocator.cpp

@@ -6,6 +6,8 @@
 
 #define ASSERT_THREAD() AssertMsg(this->pageAllocator->ValidThreadAccess(), "Arena allocation should only be used by a single thread")
 
+const uint Memory::StandAloneFreeListPolicy::MaxEntriesGrowth;
+
 template __forceinline BVSparseNode * BVSparse<JitArenaAllocator>::NodeFromIndex(BVIndex i, BVSparseNode *** prevNextFieldOut, bool create);
 
 ArenaData::ArenaData(PageAllocator * pageAllocator) :
@@ -744,8 +746,15 @@ void * InPlaceFreeListPolicy::Allocate(void * policy, size_t size)
         freeObjectLists[index] = freeObject->next;
 
 #ifdef ARENA_MEMORY_VERIFY
+#ifndef _MSC_VER
+#pragma clang diagnostic push
+#pragma clang diagnostic ignored "-Wsizeof-pointer-memaccess"
+#endif
         // Make sure the next pointer bytes are also DbgFreeMemFill-ed.
         memset(freeObject, DbgFreeMemFill, sizeof(freeObject->next));
+#ifndef _MSC_VER
+#pragma clang diagnostic pop
+#endif
 #endif
     }
 

+ 2 - 1
lib/Common/Memory/CMakeLists.txt

@@ -1,4 +1,4 @@
-add_library (Chakra.Common.Memory
+add_library (Chakra.Common.Memory STATIC
     # xplat-todo: Include platform\XDataAllocator.cpp
     # Needed on windows, need a replacement for linux to do
     # amd64 stack walking
@@ -39,6 +39,7 @@ add_library (Chakra.Common.Memory
     SmallNormalHeapBucket.cpp
     StressTest.cpp
     VirtualAllocWrapper.cpp
+    amd64/amd64_SAVE_REGISTERS.S
     )
 
 include_directories(..)

+ 4 - 0
lib/Common/Memory/CommonMemoryPch.h

@@ -9,6 +9,7 @@
 typedef _Return_type_success_(return >= 0) LONG NTSTATUS;
 #define NT_SUCCESS(Status) (((NTSTATUS)(Status)) >= 0)
 
+#ifndef USING_PAL_STDLIB
 // === C Runtime Header Files ===
 #include <time.h>
 #if defined(_UCRT)
@@ -16,6 +17,7 @@ typedef _Return_type_success_(return >= 0) LONG NTSTATUS;
 #else
 #include <math.h>
 #endif
+#endif
 
 // Exceptions
 #include "Exceptions/ExceptionBase.h"
@@ -32,11 +34,13 @@ typedef _Return_type_success_(return >= 0) LONG NTSTATUS;
 #include "Core/ProfileMemory.h"
 #include "Core/StackBackTrace.h"
 
+#ifdef _MSC_VER
 #pragma warning(push)
 #if defined(PROFILE_RECYCLER_ALLOC) || defined(HEAP_TRACK_ALLOC) || defined(ENABLE_DEBUG_CONFIG_OPTIONS)
 #include <typeinfo.h>
 #endif
 #pragma warning(pop)
+#endif
 
 // Inl files
 #include "Memory/Recycler.inl"

+ 33 - 33
lib/Common/Memory/CustomHeap.cpp

@@ -205,8 +205,8 @@ Allocation* Heap::Alloc(size_t bytes, ushort pdataCount, ushort xdataSize, bool
         return allocation;
     }
 
-    VerboseHeapTrace(L"Bucket is %d\n", bucket);
-    VerboseHeapTrace(L"Requested: %d bytes. Allocated: %d bytes\n", bytes, bytesToAllocate);
+    VerboseHeapTrace(CH_WSTR("Bucket is %d\n"), bucket);
+    VerboseHeapTrace(CH_WSTR("Requested: %d bytes. Allocated: %d bytes\n"), bytes, bytesToAllocate);
 
     Page* page = nullptr;
     if(!this->buckets[bucket].Empty())
@@ -289,7 +289,7 @@ BOOL Heap::ProtectAllocation(__in Allocation* allocation, DWORD dwVirtualProtect
 #if DBG_DUMP || defined(RECYCLER_TRACE)
         if (Js::Configuration::Global.flags.IsEnabled(Js::TraceProtectPagesFlag))
         {
-            Output::Print(L"Protecting large allocation\n");
+            Output::Print(CH_WSTR("Protecting large allocation\n"));
         }
 #endif
         segment = allocation->largeObjectAllocation.segment;
@@ -309,7 +309,7 @@ BOOL Heap::ProtectAllocation(__in Allocation* allocation, DWORD dwVirtualProtect
             pageCount = allocation->GetPageCount();
         }
 
-        VerboseHeapTrace(L"Protecting 0x%p with 0x%x\n", address, dwVirtualProtectFlags);
+        VerboseHeapTrace(CH_WSTR("Protecting 0x%p with 0x%x\n"), address, dwVirtualProtectFlags);
         return this->ProtectPages(address, pageCount, segment, dwVirtualProtectFlags, desiredOldProtectFlag);
     }
     else
@@ -317,14 +317,14 @@ BOOL Heap::ProtectAllocation(__in Allocation* allocation, DWORD dwVirtualProtect
 #if DBG_DUMP || defined(RECYCLER_TRACE)
         if (Js::Configuration::Global.flags.IsEnabled(Js::TraceProtectPagesFlag))
         {
-            Output::Print(L"Protecting small allocation\n");
+            Output::Print(CH_WSTR("Protecting small allocation\n"));
         }
 #endif
         segment = allocation->page->segment;
         address = allocation->page->address;
         pageCount = 1;
 
-        VerboseHeapTrace(L"Protecting 0x%p with 0x%x\n", address, dwVirtualProtectFlags);
+        VerboseHeapTrace(CH_WSTR("Protecting 0x%p with 0x%x\n"), address, dwVirtualProtectFlags);
         return this->ProtectPages(address, pageCount, segment, dwVirtualProtectFlags, desiredOldProtectFlag);
     }
 }
@@ -427,7 +427,7 @@ Allocation* Heap::AllocLargeObject(size_t bytes, ushort pdataCount, ushort xdata
             bool transfer = currentPage->segment == segment;
             if(transfer)
             {
-                VerboseHeapTrace(L"Moving page from bucket %d to full list because no XDATA allocations can be made\n", currentPage->currentBucket);
+                VerboseHeapTrace(CH_WSTR("Moving page from bucket %d to full list because no XDATA allocations can be made\n"), currentPage->currentBucket);
             }
             return transfer;
         } , this->buckets, this->fullPages);
@@ -442,7 +442,7 @@ void Heap::FreeDecommittedLargeObjects()
     Assert(inDtor);
     FOREACH_DLISTBASE_ENTRY_EDITING(Allocation, allocation, &this->decommittedLargeObjects, largeObjectIter)
     {
-        VerboseHeapTrace(L"Decommitting large object at address 0x%p of size %u\n", allocation.address, allocation.size);
+        VerboseHeapTrace(CH_WSTR("Decommitting large object at address 0x%p of size %u\n"), allocation.address, allocation.size);
 
         this->ReleaseDecommitted(allocation.address, allocation.GetPageCount(), allocation.largeObjectAllocation.segment);
 
@@ -558,18 +558,18 @@ Allocation* Heap::AllocInPage(Page* page, size_t bytes, ushort pdataCount, ushor
 #endif
 
     page->freeBitVector.ClearRange(index, length);
-    VerboseHeapTrace(L"ChunkSize: %d, Index: %d, Free bit vector in page: ", length, index);
+    VerboseHeapTrace(CH_WSTR("ChunkSize: %d, Index: %d, Free bit vector in page: "), length, index);
 
 #if VERBOSE_HEAP
     page->freeBitVector.DumpWord();
 #endif
-    VerboseHeapTrace(L"\n");
+    VerboseHeapTrace(CH_WSTR("\n"));
 
 
     if (this->ShouldBeInFullList(page))
     {
         BucketId bucket = page->currentBucket;
-        VerboseHeapTrace(L"Moving page from bucket %d to full list\n", bucket);
+        VerboseHeapTrace(CH_WSTR("Moving page from bucket %d to full list\n"), bucket);
 
         this->buckets[bucket].MoveElementTo(page, &this->fullPages[bucket]);
     }
@@ -585,7 +585,7 @@ Allocation* Heap::AllocInPage(Page* page, size_t bytes, ushort pdataCount, ushor
             bool transfer = currentPage->segment == page->segment;
             if(transfer)
             {
-                VerboseHeapTrace(L"Moving page from bucket %d to full list because no XDATA allocations can be made\n", page->currentBucket);
+                VerboseHeapTrace(CH_WSTR("Moving page from bucket %d to full list because no XDATA allocations can be made\n"), page->currentBucket);
             }
             return transfer;
         } , this->buckets, this->fullPages);
@@ -610,7 +610,7 @@ Heap::EnsurePreReservedPageAllocation(PreReservedVirtualAllocWrapper * preReserv
 
         if (preReservedRegionStartAddress == nullptr)
         {
-            VerboseHeapTrace(L"PRE-RESERVE: PreReserved Segment CANNOT be allocated \n");
+            VerboseHeapTrace(CH_WSTR("PRE-RESERVE: PreReserved Segment CANNOT be allocated \n"));
         }
         return preReservedRegionStartAddress;
 }
@@ -629,7 +629,7 @@ Page* Heap::AllocNewPage(BucketId bucket, bool canAllocInPreReservedHeapPageSegm
 
             if (address == nullptr)
             {
-                VerboseHeapTrace(L"PRE-RESERVE: PreReserved Segment CANNOT be allocated \n");
+                VerboseHeapTrace(CH_WSTR("PRE-RESERVE: PreReserved Segment CANNOT be allocated \n"));
             }
         }
 
@@ -643,7 +643,7 @@ Page* Heap::AllocNewPage(BucketId bucket, bool canAllocInPreReservedHeapPageSegm
         }
         else
         {
-            VerboseHeapTrace(L"PRE-RESERVE: Allocing new page in PreReserved Segment \n");
+            VerboseHeapTrace(CH_WSTR("PRE-RESERVE: Allocing new page in PreReserved Segment \n"));
         }
     }
 
@@ -669,7 +669,7 @@ Page* Heap::AllocNewPage(BucketId bucket, bool canAllocInPreReservedHeapPageSegm
     ProtectPages(address, 1, pageSegment, protectFlags, PAGE_READWRITE);
 
     // Switch to allocating on a list of pages so we can do leak tracking later
-    VerboseHeapTrace(L"Allocing new page in bucket %d\n", bucket);
+    VerboseHeapTrace(CH_WSTR("Allocing new page in bucket %d\n"), bucket);
     Page* page = this->buckets[bucket].PrependNode(this->auxiliaryAllocator, address, pageSegment, bucket);
 
     if (page == nullptr)
@@ -740,9 +740,9 @@ Page* Heap::FindPageToSplit(BucketId targetBucket, bool findPreReservedHeapPages
                 Page* page = &pageInBucket;
                 if (findPreReservedHeapPages)
                 {
-                    VerboseHeapTrace(L"PRE-RESERVE: Found page for splitting in Pre Reserved Segment\n");
+                    VerboseHeapTrace(CH_WSTR("PRE-RESERVE: Found page for splitting in Pre Reserved Segment\n"));
                 }
-                VerboseHeapTrace(L"Found page to split. Moving from bucket %d to %d\n", b, targetBucket);
+                VerboseHeapTrace(CH_WSTR("Found page to split. Moving from bucket %d to %d\n"), b, targetBucket);
                 return AddPageToBucket(page, targetBucket);
             }
         }
@@ -800,7 +800,7 @@ bool Heap::FreeAllocation(Allocation* object)
 
     if (this->ShouldBeInFullList(page))
     {
-        VerboseHeapTrace(L"Recycling page 0x%p because address 0x%p of size %d was freed\n", page->address, object->address, object->size);
+        VerboseHeapTrace(CH_WSTR("Recycling page 0x%p because address 0x%p of size %d was freed\n"), page->address, object->address, object->size);
 
         // If the object being freed is equal to the page size, we're
         // going to remove it anyway so don't add it to a bucket
@@ -830,7 +830,7 @@ bool Heap::FreeAllocation(Allocation* object)
                 AutoCriticalSection autocs(&this->cs);
                 this->ReleasePages(pageAddress, 1, segment);
             }
-            VerboseHeapTrace(L"FastPath: freeing page-sized object directly\n");
+            VerboseHeapTrace(CH_WSTR("FastPath: freeing page-sized object directly\n"));
             return true;
         }
     }
@@ -850,18 +850,18 @@ bool Heap::FreeAllocation(Allocation* object)
     // Fill the old buffer with debug breaks
     CustomHeap::FillDebugBreak((BYTE *)object->address, object->size);
 
-    VerboseHeapTrace(L"Setting %d bits starting at bit %d, Free bit vector in page was ", length, index);
+    VerboseHeapTrace(CH_WSTR("Setting %d bits starting at bit %d, Free bit vector in page was "), length, index);
 #if VERBOSE_HEAP
     page->freeBitVector.DumpWord();
 #endif
-    VerboseHeapTrace(L"\n");
+    VerboseHeapTrace(CH_WSTR("\n"));
 
     page->freeBitVector.SetRange(index, length);
-    VerboseHeapTrace(L"Free bit vector in page: ", length, index);
+    VerboseHeapTrace(CH_WSTR("Free bit vector in page: "), length, index);
 #if VERBOSE_HEAP
     page->freeBitVector.DumpWord();
 #endif
-    VerboseHeapTrace(L"\n");
+    VerboseHeapTrace(CH_WSTR("\n"));
 
 #if DBG_DUMP
     this->freeObjectSize += object->size;
@@ -878,7 +878,7 @@ bool Heap::FreeAllocation(Allocation* object)
             // Templatize this to remove branches/make code more compact?
             if (&pageInBucket == page)
             {
-                VerboseHeapTrace(L"Removing page in bucket %d\n", page->currentBucket);
+                VerboseHeapTrace(CH_WSTR("Removing page in bucket %d\n"), page->currentBucket);
                 {
                     AutoCriticalSection autocs(&this->cs);
                     this->ReleasePages(page->address, 1, page->segment);
@@ -933,7 +933,7 @@ void Heap::FreePage(Page* page)
     EnsurePageWriteable(page);
     size_t freeSpace = page->freeBitVector.Count() * Page::Alignment;
 
-    VerboseHeapTrace(L"Removing page in bucket %d, freeSpace: %d\n", page->currentBucket, freeSpace);
+    VerboseHeapTrace(CH_WSTR("Removing page in bucket %d, freeSpace: %d\n"), page->currentBucket, freeSpace);
     this->ReleasePages(page->address, 1, page->segment);
 
 #if DBG_DUMP
@@ -986,7 +986,7 @@ void Heap::FreeXdata(XDataAllocation* xdata, void* segment)
             bool transfer = currentPage->segment == segment && !currentPage->HasNoSpace();
             if(transfer)
             {
-                VerboseHeapTrace(L"Recycling page 0x%p because XDATA was freed\n", currentPage->address);
+                VerboseHeapTrace(CH_WSTR("Recycling page 0x%p because XDATA was freed\n"), currentPage->address);
             }
             return transfer;
         }, this->fullPages, this->buckets);
@@ -1002,13 +1002,13 @@ void Heap::FreeXdata(XDataAllocation* xdata, void* segment)
 #if DBG_DUMP
 void Heap::DumpStats()
 {
-    HeapTrace(L"Total allocation size: %d\n", totalAllocationSize);
-    HeapTrace(L"Total free size: %d\n", freeObjectSize);
-    HeapTrace(L"Total allocations since last compact: %d\n", allocationsSinceLastCompact);
-    HeapTrace(L"Total frees since last compact: %d\n", freesSinceLastCompact);
-    HeapTrace(L"Large object count: %d\n", this->largeObjectAllocations.Count());
+    HeapTrace(CH_WSTR("Total allocation size: %d\n"), totalAllocationSize);
+    HeapTrace(CH_WSTR("Total free size: %d\n"), freeObjectSize);
+    HeapTrace(CH_WSTR("Total allocations since last compact: %d\n"), allocationsSinceLastCompact);
+    HeapTrace(CH_WSTR("Total frees since last compact: %d\n"), freesSinceLastCompact);
+    HeapTrace(CH_WSTR("Large object count: %d\n"), this->largeObjectAllocations.Count());
 
-    HeapTrace(L"Buckets: \n");
+    HeapTrace(CH_WSTR("Buckets: \n"));
     for (int i = 0; i < BucketId::NumBuckets; i++)
     {
         printf("\t%d => %u [", (1 << (i + 7)), buckets[i].Count());

+ 5 - 3
lib/Common/Memory/HeapAllocator.cpp

@@ -515,7 +515,7 @@ MemoryLeakCheck::~MemoryLeakCheck()
     {
         if (enableOutput)
         {
-            Output::Print(L"FATAL ERROR: Memory Leak Detected\n");
+            Output::Print(CH_WSTR("FATAL ERROR: Memory Leak Detected\n"));
         }
         LeakRecord * current = head;
         do
@@ -533,14 +533,16 @@ MemoryLeakCheck::~MemoryLeakCheck()
         while (current != nullptr);
         if (enableOutput)
         {
-            Output::Print(L"-------------------------------------------------------------------------------------\n");
-            Output::Print(L"Total leaked: %d bytes (%d objects)\n", leakedBytes, leakedCount);
+            Output::Print(CH_WSTR("-------------------------------------------------------------------------------------\n"));
+            Output::Print(CH_WSTR("Total leaked: %d bytes (%d objects)\n"), leakedBytes, leakedCount);
             Output::Flush();
         }
+#ifdef GENERATE_DUMP
         if (enableOutput)
         {
             Js::Throw::GenerateDump(Js::Configuration::Global.flags.DumpOnCrash, true, true);
         }
+#endif
     }
 }
 

+ 13 - 12
lib/Common/Memory/HeapBlock.cpp

@@ -505,7 +505,7 @@ SmallHeapBlockT<TBlockAttributes>::ReleasePages(Recycler * recycler)
 #if DBG
     if (this->IsLeafBlock())
     {
-        RecyclerVerboseTrace(recycler->GetRecyclerFlagsTable(), L"Releasing leaf block pages at address 0x%p\n", address);
+        RecyclerVerboseTrace(recycler->GetRecyclerFlagsTable(), CH_WSTR("Releasing leaf block pages at address 0x%p\n"), address);
     }
 #endif
 
@@ -542,6 +542,7 @@ SmallHeapBlockT<TBlockAttributes>::ReleasePages(Recycler * recycler)
 
 }
 
+#if ENABLE_BACKGROUND_PAGE_FREEING
 template <class TBlockAttributes>
 template<bool pageheap>
 void
@@ -574,6 +575,7 @@ SmallHeapBlockT<TBlockAttributes>::BackgroundReleasePagesSweep(Recycler* recycle
     this->segment = nullptr;
     this->Reset();
 }
+#endif
 
 template <class TBlockAttributes>
 void
@@ -582,7 +584,7 @@ SmallHeapBlockT<TBlockAttributes>::ReleasePagesShutdown(Recycler * recycler)
 #if DBG
     if (this->IsLeafBlock())
     {
-        RecyclerVerboseTrace(recycler->GetRecyclerFlagsTable(), L"Releasing leaf block pages at address 0x%p\n", address);
+        RecyclerVerboseTrace(recycler->GetRecyclerFlagsTable(), CH_WSTR("Releasing leaf block pages at address 0x%p\n"), address);
     }
 
 #ifdef RECYCLER_PAGE_HEAP
@@ -866,7 +868,7 @@ template <class TBlockAttributes>
 void
 SmallHeapBlockT<TBlockAttributes>::VerifyMarkBitVector()
 {
-    this->GetRecycler()->heapBlockMap.VerifyMarkCountForPages<TBlockAttributes::BitVectorCount>(this->address, TBlockAttributes::PageCount);
+    this->GetRecycler()->heapBlockMap.template VerifyMarkCountForPages<TBlockAttributes::BitVectorCount>(this->address, TBlockAttributes::PageCount);
 }
 
 template <class TBlockAttributes>
@@ -1301,7 +1303,7 @@ SmallHeapBlockT<TBlockAttributes>::Sweep(RecyclerSweep& recyclerSweep, bool queu
         {
             if (InPageHeapMode())
             {
-                PageHeapVerboseTrace(recycler->GetRecyclerFlagsTable(), L"Heap block 0x%p is empty\n", this);
+                PageHeapVerboseTrace(recycler->GetRecyclerFlagsTable(), CH_WSTR("Heap block 0x%p is empty\n"), this);
             }
         }
 #endif
@@ -1314,7 +1316,7 @@ SmallHeapBlockT<TBlockAttributes>::Sweep(RecyclerSweep& recyclerSweep, bool queu
     {
         if (InPageHeapMode())
         {
-            PageHeapVerboseTrace(recycler->GetRecyclerFlagsTable(), L"Heap block 0x%p is not empty, local mark count is %d, expected sweep count is %d\n", this, localMarkCount, expectSweepCount);
+            PageHeapVerboseTrace(recycler->GetRecyclerFlagsTable(), CH_WSTR("Heap block 0x%p is not empty, local mark count is %d, expected sweep count is %d\n"), this, localMarkCount, expectSweepCount);
         }
     }
 #endif
@@ -1982,8 +1984,7 @@ void SmallHeapBlockT<TBlockAttributes>::VerifyBumpAllocated(_In_ char * bumpAllo
                 }
                 else
                 {
-                    Recycler::VerifyCheck(false, L"Non-Finalizable block should not have finalizable objects",
-                        this->GetAddress(), &this->ObjectInfo(i));
+                    Recycler::VerifyCheck(false, CH_WSTR("Non-Finalizable block should not have finalizable objects"), this->GetAddress(), &this->ObjectInfo(i));
                 }
             }
         }
@@ -2002,7 +2003,7 @@ void SmallHeapBlockT<TBlockAttributes>::Verify(bool pendingDispose)
     char * memBlock = this->GetAddress();
     uint objectBitDelta = this->GetObjectBitDelta();
     Recycler::VerifyCheck(!pendingDispose || this->IsAnyFinalizableBlock(),
-        L"Non-finalizable block shouldn't be disposing. May have corrupted block type.",
+        CH_WSTR("Non-finalizable block shouldn't be disposing. May have corrupted block type."),
         this->GetAddress(), (void *)&this->heapBlockType);
 
     if (HasPendingDisposeObjects())
@@ -2035,7 +2036,7 @@ void SmallHeapBlockT<TBlockAttributes>::Verify(bool pendingDispose)
                 Recycler::VerifyCheck(nextFree == nullptr
                     || (nextFree >= address && nextFree < this->GetEndAddress()
                     && free->Test(GetAddressBitIndex(nextFree))),
-                    L"SmallHeapBlock memory written to after freed", memBlock, memBlock);
+                    CH_WSTR("SmallHeapBlock memory written to after freed"), memBlock, memBlock);
                 Recycler::VerifyCheckFill(memBlock + sizeof(FreeObject), this->GetObjectSize() - sizeof(FreeObject));
             }
         }
@@ -2056,7 +2057,7 @@ void SmallHeapBlockT<TBlockAttributes>::Verify(bool pendingDispose)
                     || (nextFree >= address && nextFree < this->GetEndAddress()
                     && explicitFreeBits.Test(GetAddressBitIndex(nextFree)))
                     || nextFreeHeapBlock->GetObjectSize(nextFree) == this->objectSize,
-                    L"SmallHeapBlock memory written to after freed", memBlock, memBlock);
+                    CH_WSTR("SmallHeapBlock memory written to after freed"), memBlock, memBlock);
                 recycler->VerifyCheckPadExplicitFreeList(memBlock, this->GetObjectSize());
             }
             else
@@ -2072,7 +2073,7 @@ void SmallHeapBlockT<TBlockAttributes>::Verify(bool pendingDispose)
                 }
                 else
                 {
-                    Recycler::VerifyCheck(false, L"Non-Finalizable block should not have finalizable objects",
+                    Recycler::VerifyCheck(false, CH_WSTR("Non-Finalizable block should not have finalizable objects"),
                         this->GetAddress(), &this->ObjectInfo(i));
                 }
             }
@@ -2084,7 +2085,7 @@ void SmallHeapBlockT<TBlockAttributes>::Verify(bool pendingDispose)
     if (this->IsAnyFinalizableBlock())
     {
         Recycler::VerifyCheck(this->AsFinalizableBlock<TBlockAttributes>()->finalizeCount == verifyFinalizeCount,
-            L"SmallHeapBlock finalize count mismatch", this->GetAddress(), &this->AsFinalizableBlock<TBlockAttributes>()->finalizeCount);
+            CH_WSTR("SmallHeapBlock finalize count mismatch"), this->GetAddress(), &this->AsFinalizableBlock<TBlockAttributes>()->finalizeCount);
     }
     else
     {

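The `heapBlockMap.template VerifyMarkCountForPages<...>` change above is a conformance fix rather than a behavior change: when the object expression has a type that depends on a template parameter, standard C++ requires the `template` keyword so the `<` parses as a template argument list. MSVC accepts the call without it; clang does not. A self-contained sketch with hypothetical names:

```cpp
#include <cassert>

struct MarkMap {
    template <int BitVectorCount>
    int VerifyMarkCount() const { return BitVectorCount; }
};

template <typename TMap>
struct Block {
    TMap map;
    int Verify() const {
        // 'map' has a dependent type, so '.template' is required here
        // for the call to parse as a member-template invocation.
        return map.template VerifyMarkCount<4>();
    }
};
```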
+ 19 - 17
lib/Common/Memory/HeapBlock.h

@@ -187,25 +187,25 @@ template <class TBlockAttributes> class SmallNormalWithBarrierHeapBlockT;
 template <class TBlockAttributes> class SmallFinalizableWithBarrierHeapBlockT;
 
 #define EXPLICIT_INSTANTIATE_WITH_SMALL_HEAP_BLOCK_TYPE(TemplateType) \
-    template class TemplateType<SmallNormalHeapBlock>; \
-    template class TemplateType<SmallLeafHeapBlock>; \
-    template class TemplateType<SmallFinalizableHeapBlock>; \
-    template class TemplateType<SmallNormalWithBarrierHeapBlock>; \
-    template class TemplateType<SmallFinalizableWithBarrierHeapBlock>; \
-    template class TemplateType<MediumNormalHeapBlock>; \
-    template class TemplateType<MediumLeafHeapBlock>; \
-    template class TemplateType<MediumFinalizableHeapBlock>; \
-    template class TemplateType<MediumNormalWithBarrierHeapBlock>; \
-    template class TemplateType<MediumFinalizableWithBarrierHeapBlock>; \
+    template class TemplateType<Memory::SmallNormalHeapBlock>;        \
+    template class TemplateType<Memory::SmallLeafHeapBlock>; \
+    template class TemplateType<Memory::SmallFinalizableHeapBlock>; \
+    template class TemplateType<Memory::SmallNormalWithBarrierHeapBlock>; \
+    template class TemplateType<Memory::SmallFinalizableWithBarrierHeapBlock>; \
+    template class TemplateType<Memory::MediumNormalHeapBlock>; \
+    template class TemplateType<Memory::MediumLeafHeapBlock>; \
+    template class TemplateType<Memory::MediumFinalizableHeapBlock>; \
+    template class TemplateType<Memory::MediumNormalWithBarrierHeapBlock>; \
+    template class TemplateType<Memory::MediumFinalizableWithBarrierHeapBlock>; \
 
 #else
 #define EXPLICIT_INSTANTIATE_WITH_SMALL_HEAP_BLOCK_TYPE(TemplateType) \
-    template class TemplateType<SmallNormalHeapBlock>; \
-    template class TemplateType<SmallLeafHeapBlock>; \
-    template class TemplateType<SmallFinalizableHeapBlock>; \
-    template class TemplateType<MediumNormalHeapBlock>; \
-    template class TemplateType<MediumLeafHeapBlock>; \
-    template class TemplateType<MediumFinalizableHeapBlock>; \
+    template class TemplateType<Memory::SmallNormalHeapBlock>; \
+    template class TemplateType<Memory::SmallLeafHeapBlock>; \
+    template class TemplateType<Memory::SmallFinalizableHeapBlock>; \
+    template class TemplateType<Memory::MediumNormalHeapBlock>;     \
+    template class TemplateType<Memory::MediumLeafHeapBlock>; \
+    template class TemplateType<Memory::MediumFinalizableHeapBlock>; \
 
 #endif
 
@@ -622,9 +622,11 @@ public:
     template<bool pageheap>
     void ReleasePagesSweep(Recycler * recycler);
     void ReleasePagesShutdown(Recycler * recycler);
+#if ENABLE_BACKGROUND_PAGE_FREEING
     template<bool pageheap>
     void BackgroundReleasePagesSweep(Recycler* recycler);
-
+#endif
+    
     void Reset();
 
     void EnumerateObjects(ObjectInfoBits infoBits, void (*CallBackFunction)(void * address, size_t size));

+ 4 - 1
lib/Common/Memory/HeapBlockMap.cpp

@@ -4,6 +4,9 @@
 //-------------------------------------------------------------------------------------------------------
 #include "CommonMemoryPch.h"
 
+const uint Memory::HeapBlockMap32::L1Count;
+const uint Memory::HeapBlockMap32::L2Count;
+
 #if defined(_M_X64_OR_ARM64)
 HeapBlockMap32::HeapBlockMap32(__in char * startAddress) :
     startAddress(startAddress),
@@ -106,7 +109,7 @@ HeapBlockMap32::SetHeapBlockNoCheck(void * address, uint pageCount, HeapBlock *
 
         id2 = 0;
         id1++;
-        currentPageCount = min(pageCount, L2Count);
+        currentPageCount = min(pageCount, Memory::HeapBlockMap32::L2Count);
     }
 }
 
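The new out-of-line `const uint Memory::HeapBlockMap32::L1Count;` definitions (and the matching ones later in HeapInfo.cpp) address another clang-only diagnostic: `min` takes its arguments by const reference, which odr-uses the constant, and under the C++11/14 rules clang enforces an odr-used `static const` member needs a definition outside the class. A sketch with hypothetical names:

```cpp
#include <algorithm>
#include <cassert>

struct Map32 {
    static const unsigned L2Count = 256;  // in-class declaration + initializer
};

// The out-of-line definition; without it, clang reports an undefined
// reference once L2Count is passed by const reference.
const unsigned Map32::L2Count;

unsigned ClampPageCount(unsigned pageCount) {
    return std::min(pageCount, Map32::L2Count);  // odr-uses L2Count
}
```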

+ 19 - 17
lib/Common/Memory/HeapBucket.cpp

@@ -34,8 +34,6 @@ HeapBucket::GetMediumBucketIndex() const
 
 namespace Memory
 {
-EXPLICIT_INSTANTIATE_WITH_SMALL_HEAP_BLOCK_TYPE(HeapBucketT);
-}
 
 template <typename TBlockType>
 HeapBucketT<TBlockType>::HeapBucketT() :
@@ -67,6 +65,7 @@ HeapBucketT<TBlockType>::~HeapBucketT()
     DeleteEmptyHeapBlockList(this->emptyBlockList);
     Assert(this->heapBlockCount + this->newHeapBlockCount + this->emptyHeapBlockCount == 0);
 }
+};
 
 template <typename TBlockType>
 void
@@ -108,7 +107,7 @@ HeapBucketT<TBlockType>::Initialize(HeapInfo * heapInfo, uint sizeCat)
 #endif
     this->sizeCat = sizeCat;
     allocatorHead.Initialize();
-#ifdef PROFILE_RECYCLER_ALLOC
+#if defined(PROFILE_RECYCLER_ALLOC) || defined(RECYCLER_MEMORY_VERIFY)
     allocatorHead.bucket = this;
 #endif
     this->lastExplicitFreeListAllocator = &allocatorHead;
@@ -187,7 +186,7 @@ HeapBucketT<TBlockType>::AddAllocator(TBlockAllocatorType * allocator)
     allocator->prev = &this->allocatorHead;
     allocator->next->prev = allocator;
     this->allocatorHead.next = allocator;
-#ifdef PROFILE_RECYCLER_ALLOC
+#if defined(PROFILE_RECYCLER_ALLOC) || defined(RECYCLER_MEMORY_VERIFY)
     allocator->bucket = this;
 #endif
 }
@@ -230,7 +229,7 @@ HeapBucketT<TBlockType>::IntegrateBlock(char * blockAddress, PageSegment * segme
 #ifdef RECYCLER_PAGE_HEAP
     heapBlock->ClearPageHeap();
 #endif
-    if (!heapBlock->SetPage<false>(blockAddress, segment, recycler))
+    if (!heapBlock->template SetPage<false>(blockAddress, segment, recycler))
     {
         FreeHeapBlock(heapBlock);
         return false;
@@ -286,7 +285,8 @@ bool
 HeapBucketT<TBlockType>::HasPendingDisposeHeapBlocks() const
 {
 #ifdef RECYCLER_WRITE_BARRIER
-    return (IsFinalizableBucket || IsFinalizableWriteBarrierBucket) && ((SmallFinalizableHeapBucketT<TBlockType::HeapBlockAttributes> *)this)->pendingDisposeList != nullptr;
+    return (IsFinalizableBucket || IsFinalizableWriteBarrierBucket) &&
+    ((SmallFinalizableHeapBucketT<typename TBlockType::HeapBlockAttributes> *)this)->pendingDisposeList != nullptr;
 #else
     return IsFinalizableBucket && ((SmallFinalizableHeapBucketT<TBlockType::HeapBlockAttributes> *)this)->pendingDisposeList != nullptr;
 #endif
@@ -390,7 +390,7 @@ template <typename TBlockType>
 char *
 HeapBucketT<TBlockType>::PageHeapAlloc(Recycler * recycler, size_t sizeCat, ObjectInfoBits attributes, PageHeapMode mode, bool nothrow)
 {
-    AllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), L"In PageHeapAlloc [Size: 0x%x, Attributes: 0x%x]\n", sizeCat, attributes);
+    AllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), CH_WSTR("In PageHeapAlloc [Size: 0x%x, Attributes: 0x%x]\n"), sizeCat, attributes);
 
     Assert(sizeCat == this->sizeCat);
     char * memBlock = nullptr;
@@ -484,7 +484,7 @@ template <typename TBlockType>
 char *
 HeapBucketT<TBlockType>::SnailAlloc(Recycler * recycler, TBlockAllocatorType * allocator, size_t sizeCat, ObjectInfoBits attributes, bool nothrow)
 {
-    AllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), L"In SnailAlloc [Size: 0x%x, Attributes: 0x%x]\n", sizeCat, attributes);
+    AllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), CH_WSTR("In SnailAlloc [Size: 0x%x, Attributes: 0x%x]\n"), sizeCat, attributes);
 
     Assert(sizeCat == this->sizeCat);
     Assert((attributes & InternalObjectInfoBitMask) == attributes);
@@ -503,7 +503,7 @@ HeapBucketT<TBlockType>::SnailAlloc(Recycler * recycler, TBlockAllocatorType * a
     BOOL collected = recycler->disableCollectOnAllocationHeuristics ? FALSE : recycler->CollectNow<CollectOnAllocation>();
 #endif
 
-    AllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), L"TryAlloc failed, forced collection on allocation [Collected: %d]\n", collected);
+    AllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), CH_WSTR("TryAlloc failed, forced collection on allocation [Collected: %d]\n"), collected);
     if (!collected)
     {
         // We didn't collect, try to add a new heap block
@@ -515,7 +515,7 @@ HeapBucketT<TBlockType>::SnailAlloc(Recycler * recycler, TBlockAllocatorType * a
 
         // Can't even allocate a new block, we need force a collection and
         //allocate some free memory, add a new heap block again, or throw out of memory
-        AllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), L"TryAllocFromNewHeapBlock failed, forcing in-thread collection\n");
+        AllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), CH_WSTR("TryAllocFromNewHeapBlock failed, forcing in-thread collection\n"));
         recycler->CollectNow<CollectNowForceInThread>();
     }
 
@@ -527,7 +527,7 @@ HeapBucketT<TBlockType>::SnailAlloc(Recycler * recycler, TBlockAllocatorType * a
         return memBlock;
     }
 
-    AllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), L"SlowAlloc failed\n");
+    AllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), CH_WSTR("SlowAlloc failed\n"));
 
     // do the allocation
     memBlock = this->TryAlloc(recycler, allocator, sizeCat, attributes);
@@ -536,7 +536,7 @@ HeapBucketT<TBlockType>::SnailAlloc(Recycler * recycler, TBlockAllocatorType * a
         return memBlock;
     }
 
-    AllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), L"TryAlloc failed\n");
+    AllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), CH_WSTR("TryAlloc failed\n"));
     // add a heap block if there are no preallocated memory left.
     memBlock = TryAllocFromNewHeapBlock(recycler, allocator, sizeCat, attributes);
     if (memBlock != nullptr)
@@ -544,7 +544,7 @@ HeapBucketT<TBlockType>::SnailAlloc(Recycler * recycler, TBlockAllocatorType * a
         return memBlock;
     }
 
-    AllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), L"TryAllocFromNewHeapBlock failed- triggering OOM handler");
+    AllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), CH_WSTR("TryAllocFromNewHeapBlock failed- triggering OOM handler"));
 
     if (nothrow == false)
     {
@@ -731,7 +731,7 @@ HeapBucketT<TBlockType>::VerifyBlockConsistencyInList(TBlockType * heapBlock, Re
     }
     else if (*expectDispose)
     {
-        Assert(heapBlock->IsAnyFinalizableBlock() && heapBlock->AsFinalizableBlock<TBlockType::HeapBlockAttributes>()->IsPendingDispose());
+        Assert(heapBlock->IsAnyFinalizableBlock() && heapBlock->template AsFinalizableBlock<typename TBlockType::HeapBlockAttributes>()->IsPendingDispose());
         Assert(heapBlock->HasAnyDisposeObjects());
     }
     else
@@ -890,7 +890,7 @@ HeapBucketT<TBlockType>::SweepHeapBlockList(RecyclerSweep& recyclerSweep, TBlock
             Assert(IsFinalizableBucket);
 #endif
 
-            DebugOnly(heapBlock->AsFinalizableBlock<TBlockType::HeapBlockAttributes>()->SetIsPendingDispose());
+            DebugOnly(heapBlock->template AsFinalizableBlock<typename TBlockType::HeapBlockAttributes>()->SetIsPendingDispose());
 
             // These are the blocks that have swept finalizable object
 
@@ -1193,7 +1193,7 @@ void
 HeapBucketT<TBlockType>::VerifyHeapBlockCount(bool background)
 {
     // TODO-REFACTOR: GetNonEmptyHeapBlockCount really should be virtual
-    static_cast<SmallHeapBlockType<TBlockType::RequiredAttributes, TBlockType::HeapBlockAttributes>::BucketType *>(this)->GetNonEmptyHeapBlockCount(true);
+    static_cast<typename SmallHeapBlockType<TBlockType::RequiredAttributes, typename TBlockType::HeapBlockAttributes>::BucketType *>(this)->GetNonEmptyHeapBlockCount(true);
     if (!background)
     {
         this->GetEmptyHeapBlockCount();
@@ -1693,4 +1693,6 @@ namespace Memory
     template void HeapBucketT<MediumNormalWithBarrierHeapBlock>::SweepBucket<true>(RecyclerSweep&);
     template void HeapBucketT<MediumNormalWithBarrierHeapBlock>::SweepBucket<false>(RecyclerSweep&);
 #endif
-}
+
+    EXPLICIT_INSTANTIATE_WITH_SMALL_HEAP_BLOCK_TYPE(HeapBucketT);
+};

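Moving `EXPLICIT_INSTANTIATE_WITH_SMALL_HEAP_BLOCK_TYPE(HeapBucketT)` from the top of the `Memory` namespace to the bottom follows the same theme: an explicit instantiation definition should appear only after the member definitions it instantiates are available, and clang diagnoses the earlier placement that MSVC tolerated. A minimal sketch:

```cpp
#include <cassert>

template <typename TBlockType>
struct Bucket {
    int Count() const;
};

template <typename TBlockType>
int Bucket<TBlockType>::Count() const { return 1; }

// Legal only here, after Count() has been defined; instantiating at the
// top of the namespace would instantiate members before their bodies exist.
template struct Bucket<int>;
```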
+ 3 - 0
lib/Common/Memory/HeapInfo.cpp

@@ -15,6 +15,9 @@
 template <>
 __forceinline char* HeapInfo::RealAlloc<NoBit, false>(Recycler * recycler, size_t sizeCat);
 
+const uint SmallAllocationBlockAttributes::MaxSmallObjectCount;
+const uint MediumAllocationBlockAttributes::MaxSmallObjectCount;
+
 HeapInfo::ValidPointersMap<SmallAllocationBlockAttributes>  HeapInfo::smallAllocValidPointersMap;
 HeapInfo::ValidPointersMap<MediumAllocationBlockAttributes> HeapInfo::mediumAllocValidPointersMap;
 

+ 10 - 2
lib/Common/Memory/IdleDecommitPageAllocator.cpp

@@ -9,7 +9,11 @@ IdleDecommitPageAllocator::IdleDecommitPageAllocator(AllocationPolicyManager * p
     Js::ConfigFlagsTable& flagTable,
 #endif
     uint maxFreePageCount, uint maxIdleFreePageCount,
-    bool zeroPages, BackgroundPageQueue *  backgroundPageQueue, uint maxAllocPageCount) :
+    bool zeroPages,
+#if ENABLE_BACKGROUND_PAGE_FREEING 
+    BackgroundPageQueue *  backgroundPageQueue,
+#endif
+    uint maxAllocPageCount) :
 #ifdef IDLE_DECOMMIT_ENABLED
     idleDecommitTryEnterWaitFactor(0),
     hasDecommitTimer(false),
@@ -19,7 +23,11 @@ IdleDecommitPageAllocator::IdleDecommitPageAllocator(AllocationPolicyManager * p
 #ifndef JD_PRIVATE
         flagTable,
 #endif
-    type, maxFreePageCount, zeroPages, backgroundPageQueue, maxAllocPageCount),
+    type, maxFreePageCount, zeroPages,
+#if ENABLE_BACKGROUND_PAGE_FREEING
+    backgroundPageQueue,
+#endif        
+    maxAllocPageCount),
     maxIdleDecommitFreePageCount(maxIdleFreePageCount),
     maxNonIdleDecommitFreePageCount(maxFreePageCount)
 {

+ 4 - 1
lib/Common/Memory/IdleDecommitPageAllocator.h

@@ -20,7 +20,10 @@ public:
 #endif
         uint maxFreePageCount = 0,
         uint maxIdleFreePageCount = DefaultMaxFreePageCount,
-        bool zeroPages = false, BackgroundPageQueue * backgroundPageQueue = nullptr,
+        bool zeroPages = false,
+#if ENABLE_BACKGROUND_PAGE_FREEING
+        BackgroundPageQueue * backgroundPageQueue = nullptr,
+#endif
         uint maxAllocPageCount = PageAllocator::DefaultMaxAllocPageCount);
 
     void EnterIdleDecommit();

+ 8 - 8
lib/Common/Memory/LargeHeapBlock.cpp

@@ -453,7 +453,7 @@ LargeHeapBlock::AllocFreeListEntry(size_t size, ObjectInfoBits attributes, Large
 #endif
 
 #if DBG
-    LargeAllocationVerboseTrace(this->heapInfo->recycler->GetRecyclerFlagsTable(), L"Allocated object of size 0x%x in from free list entry at address 0x%p\n", size, allocObject);
+    LargeAllocationVerboseTrace(this->heapInfo->recycler->GetRecyclerFlagsTable(), CH_WSTR("Allocated object of size 0x%x in from free list entry at address 0x%p\n"), size, allocObject);
 #endif
 
     Assert(allocCount <= objectCount);
@@ -499,7 +499,7 @@ LargeHeapBlock::Alloc(size_t size, ObjectInfoBits attributes)
 
     Recycler* recycler = this->heapInfo->recycler;
 #if DBG
-    LargeAllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), L"Allocated object of size 0x%x in existing heap block at address 0x%p\n", size, allocObject);
+    LargeAllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), CH_WSTR("Allocated object of size 0x%x in existing heap block at address 0x%p\n"), size, allocObject);
 #endif
 
     Assert(allocCount < objectCount);
@@ -1557,7 +1557,7 @@ LargeHeapBlock::SweepObjects(Recycler * recycler)
             expectedSweepCount--;
 #endif
 #if DBG
-            LargeAllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), L"Index %d empty\n", i);
+            LargeAllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), CH_WSTR("Index %d empty\n"), i);
 #endif
             continue;
         }
@@ -1783,8 +1783,8 @@ LargeHeapBlock::Verify(Recycler * recycler)
                 if (current->headerIndex == i)
                 {
                     BYTE* objectAddress = (BYTE *)current + sizeof(LargeObjectHeader);
-                    Recycler::VerifyCheck(current->heapBlock == this, L"Invalid heap block", this, current->heapBlock);
-                    Recycler::VerifyCheck((char *)current >= lastAddress, L"LargeHeapBlock invalid object header order", this->address, current);
+                    Recycler::VerifyCheck(current->heapBlock == this, CH_WSTR("Invalid heap block"), this, current->heapBlock);
+                    Recycler::VerifyCheck((char *)current >= lastAddress, CH_WSTR("LargeHeapBlock invalid object header order"), this->address, current);
                     Recycler::VerifyCheckFill(lastAddress, (char *)current - lastAddress);
                     recycler->VerifyCheckPad(objectAddress, current->objectSize);
                     lastAddress = (char *) objectAddress + current->objectSize;
@@ -1797,16 +1797,16 @@ LargeHeapBlock::Verify(Recycler * recycler)
             continue;
         }
 
-        Recycler::VerifyCheck((char *)header >= lastAddress, L"LargeHeapBlock invalid object header order", this->address, header);
+        Recycler::VerifyCheck((char *)header >= lastAddress, CH_WSTR("LargeHeapBlock invalid object header order"), this->address, header);
         Recycler::VerifyCheckFill(lastAddress, (char *)header - lastAddress);
-        Recycler::VerifyCheck(header->objectIndex == i, L"LargeHeapBlock object index mismatch", this->address, &header->objectIndex);
+        Recycler::VerifyCheck(header->objectIndex == i, CH_WSTR("LargeHeapBlock object index mismatch"), this->address, &header->objectIndex);
         recycler->VerifyCheckPad((BYTE *)header->GetAddress(), header->objectSize);
 
         verifyFinalizeCount += ((header->GetAttributes(this->heapInfo->recycler->Cookie) & FinalizeBit) != 0);
         lastAddress = (char *)header->GetAddress() + header->objectSize;
     }
 
-    Recycler::VerifyCheck(verifyFinalizeCount == this->finalizeCount, L"LargeHeapBlock finalize object count mismatch", this->address, &this->finalizeCount);
+    Recycler::VerifyCheck(verifyFinalizeCount == this->finalizeCount, CH_WSTR("LargeHeapBlock finalize object count mismatch"), this->address, &this->finalizeCount);
 }
 #endif
 

+ 6 - 6
lib/Common/Memory/LargeHeapBucket.cpp

@@ -80,7 +80,7 @@ LargeHeapBucket::SnailAlloc(Recycler * recycler, size_t sizeCat, ObjectInfoBits
         }
         // Can't even allocate a new block, we need force a collection and
         // allocate some free memory, add a new heap block again, or throw out of memory
-        AllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), L"LargeHeapBucket::AddLargeHeapBlock failed, forcing in-thread collection\n");
+        AllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), CH_WSTR("LargeHeapBucket::AddLargeHeapBlock failed, forcing in-thread collection\n"));
         recycler->CollectNow<CollectNowForceInThread>();
     }
 
@@ -185,7 +185,7 @@ LargeHeapBucket::PageHeapAlloc(Recycler * recycler, size_t size, ObjectInfoBits
     }
 
 #if DBG
-    LargeAllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), L"Allocated new large heap block 0x%p for sizeCat 0x%x\n", heapBlock, sizeCat);
+    LargeAllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), CH_WSTR("Allocated new large heap block 0x%p for sizeCat 0x%x\n"), heapBlock, sizeCat);
 #endif
 
 #ifdef ENABLE_JS_ETW
@@ -275,7 +275,7 @@ LargeHeapBucket::AddLargeHeapBlock(size_t size, bool nothrow)
     uint objectCount = LargeHeapBlock::GetMaxLargeObjectCount(pageCount, size);
     LargeHeapBlock * heapBlock = LargeHeapBlock::New(address, pageCount, segment, objectCount, supportFreeList ? this : nullptr);
 #if DBG
-    LargeAllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), L"Allocated new large heap block 0x%p for sizeCat 0x%x\n", heapBlock, sizeCat);
+    LargeAllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), CH_WSTR("Allocated new large heap block 0x%p for sizeCat 0x%x\n"), heapBlock, sizeCat);
 #endif
 
 #ifdef ENABLE_JS_ETW
@@ -343,7 +343,7 @@ LargeHeapBucket::TryAllocFromFreeList(Recycler * recycler, size_t sizeCat, Objec
         else
         {
 #if DBG
-            LargeAllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), L"Unable to allocate object of size 0x%x from freelist\n", sizeCat);
+            LargeAllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), CH_WSTR("Unable to allocate object of size 0x%x from freelist\n"), sizeCat);
 #endif
         }
 
@@ -562,7 +562,7 @@ LargeHeapBucket::Sweep(RecyclerSweep& recyclerSweep)
     if (this->supportFreeList)
     {
 #if DBG
-        LargeAllocationVerboseTrace(recyclerSweep.GetRecycler()->GetRecyclerFlagsTable(), L"Resetting free list for 0x%x bucket\n", this->sizeCat);
+        LargeAllocationVerboseTrace(recyclerSweep.GetRecycler()->GetRecyclerFlagsTable(), CH_WSTR("Resetting free list for 0x%x bucket\n"), this->sizeCat);
 #endif
         this->freeList = nullptr;
         this->explicitFreeList = nullptr;
@@ -716,7 +716,7 @@ LargeHeapBucket::ConstructFreelist(LargeHeapBlock * heapBlock)
         this->RegisterFreeList(freeList);
 
 #if DBG
-        LargeAllocationVerboseTrace(this->GetRecycler()->GetRecyclerFlagsTable(), L"Free list created for 0x%x bucket\n", this->sizeCat);
+        LargeAllocationVerboseTrace(this->GetRecycler()->GetRecyclerFlagsTable(), CH_WSTR("Free list created for 0x%x bucket\n"), this->sizeCat);
 #endif
     }
 

+ 29 - 10
lib/Common/Memory/LeakReport.cpp

@@ -85,10 +85,10 @@ LeakReport::StartSection(wchar_t const * msg, va_list argptr)
     nestedSectionCount++;
 
 
-    Print(L"--------------------------------------------------------------------------------\n");
+    Print(CH_WSTR("--------------------------------------------------------------------------------\n"));
     vfwprintf(file, msg, argptr);
-    Print(L"\n");
-    Print(L"--------------------------------------------------------------------------------\n");
+    Print(CH_WSTR("\n"));
+    Print(CH_WSTR("--------------------------------------------------------------------------------\n"));
 }
 
 void
@@ -131,28 +131,39 @@ LeakReport::EnsureLeakReportFile()
     }
 
     wchar_t const * filename = Js::Configuration::Global.flags.LeakReport;
-    wchar_t const * openMode = L"w+";
+    wchar_t const * openMode = CH_WSTR("w+");
     wchar_t defaultFilename[_MAX_PATH];
     if (filename == nullptr)
     {
+        // xplat-todo: Implement swprintf_s in the PAL
+#ifdef _MSC_VER
         swprintf_s(defaultFilename, L"jsleakreport-%u.txt", ::GetCurrentProcessId());
+#else
+        _snwprintf(defaultFilename, _countof(defaultFilename), CH_WSTR("jsleakreport-%u.txt"), ::GetCurrentProcessId());
+#endif
+
         filename = defaultFilename;
-        openMode = L"a+";   // append mode
+        openMode = CH_WSTR("a+");   // append mode
     }
     if (_wfopen_s(&file, filename, openMode) != 0)
     {
         openReportFileFailed = true;
         return false;
     }
-    Print(L"================================================================================\n");
-    Print(L"Chakra Leak Report - PID: %d\n", ::GetCurrentProcessId());
+    Print(CH_WSTR("================================================================================\n"));
+    Print(CH_WSTR("Chakra Leak Report - PID: %d\n"), ::GetCurrentProcessId());
+
+    // xplat-todo: Make this code cross-platform
+#if _MSC_VER
     __time64_t time_value = _time64(NULL);
     wchar_t time_string[26];
     struct tm local_time;
     _localtime64_s(&local_time, &time_value);
     _wasctime_s(time_string, &local_time);
     Print(time_string);
-    Print(L"\n");
+#endif
+    
+    Print(CH_WSTR("\n"));
     return true;
 }
 
@@ -167,7 +178,11 @@ LeakReport::LogUrl(wchar_t const * url, void * globalObject)
     urlCopy[length - 1] = L'\0';
 
     record->url = urlCopy;
+#if _MSC_VER
     record->time = _time64(NULL);
+#else
+    record->time = time(NULL);
+#endif
     record->tid = ::GetCurrentThreadId();
     record->next = nullptr;
     record->scriptEngine = nullptr;
@@ -205,12 +220,16 @@ LeakReport::DumpUrl(DWORD tid)
     {
         if (curr->tid == tid)
         {
-            wchar_t timeStr[26];
+            wchar_t timeStr[26] = CH_WSTR("00:00");
+            
+            // xplat-todo: Need to implement _wasctime_s in the PAL
+#if _MSC_VER
             struct tm local_time;
             _localtime64_s(&local_time, &curr->time);
             _wasctime_s(timeStr, &local_time);
+#endif
             timeStr[wcslen(timeStr) - 1] = 0;
-            Print(L"%s - (%p, %p) %s\n", timeStr, curr->scriptEngine, curr->globalObject, curr->url);
+            Print(CH_WSTR("%s - (%p, %p) %s\n"), timeStr, curr->scriptEngine, curr->globalObject, curr->url);
             *pprev = curr->next;
             NoCheckHeapDeleteArray(wcslen(curr->url) + 1, curr->url);
             NoCheckHeapDelete(curr);

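The LeakReport timestamp changes replace MSVC-specific `__time64_t`/`_time64` with the standard `time_t`/`time`, guarding the Windows path on `_MSC_VER`. A self-contained sketch of the pattern (field names hypothetical):

```cpp
#include <cassert>
#include <ctime>

struct UrlRecord {
    time_t time;  // was __time64_t; time_t is portable to both platforms
};

UrlRecord MakeUrlRecord() {
    UrlRecord record;
#if defined(_MSC_VER)
    record.time = _time64(NULL);  // MSVC 64-bit time API
#else
    record.time = time(NULL);     // standard C, available in the PAL build
#endif
    return record;
}
```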
+ 4 - 2
lib/Common/Memory/LeakReport.h

@@ -18,7 +18,7 @@ public:
         void * scriptEngine;
     private:
         wchar_t const * url;
-        __time64_t time;
+        time_t time;
         DWORD tid;
         UrlRecord * next;
 
@@ -62,8 +62,10 @@ private:
 #define STRINGIFY(x,y) STRINGIFY2(x,y)
 #define LEAK_REPORT_PRINT(msg, ...) if (Js::Configuration::Global.flags.IsEnabled(Js::LeakReportFlag)) LeakReport::Print(msg, __VA_ARGS__)
 #define AUTO_LEAK_REPORT_SECTION(flags, msg, ...) AutoLeakReportSection STRINGIFY(__autoLeakReportSection, __COUNTER__)(flags, msg, __VA_ARGS__)
+#define AUTO_LEAK_REPORT_SECTION_0(flags, msg) AutoLeakReportSection STRINGIFY(__autoLeakReportSection, __COUNTER__)(flags, msg, "")
 #else
 #define LEAK_REPORT_PRINT(msg, ...)
-#define AUTO_LEAK_REPORT_SECTION(msg, ...)
+#define AUTO_LEAK_REPORT_SECTION(flags, msg, ...)
+#define AUTO_LEAK_REPORT_SECTION_0(flags, msg)
 #endif
 }

+ 4 - 4
lib/Common/Memory/MarkContext.inl

@@ -15,7 +15,7 @@ bool MarkContext::AddMarkedObject(void * objectAddress, size_t objectSize)
 #if DBG_DUMP
     if (recycler->forceTraceMark || recycler->GetRecyclerFlagsTable().Trace.IsEnabled(Js::MarkPhase))
     {
-        Output::Print(L" %p", objectAddress);
+        Output::Print(CH_WSTR(" %p"), objectAddress);
     }
 #endif
 
@@ -39,7 +39,7 @@ bool MarkContext::AddTrackedObject(FinalizableObject * obj)
     Assert(!recycler->inPartialCollectMode);
 #endif
 
-    FAULTINJECT_MEMORY_MARK_NOTHROW(L"AddTrackedObject", 0);
+    FAULTINJECT_MEMORY_MARK_NOTHROW(CH_WSTR("AddTrackedObject"), 0);
 
     return trackStack.Push(obj);
 }
@@ -58,7 +58,7 @@ void MarkContext::ScanMemory(void ** obj, size_t byteCount)
 #if DBG_DUMP
     if (recycler->forceTraceMark || recycler->GetRecyclerFlagsTable().Trace.IsEnabled(Js::MarkPhase))
     {
-        Output::Print(L"Scanning %p(%8d): ", obj, byteCount);
+        Output::Print(CH_WSTR("Scanning %p(%8d): "), obj, byteCount);
     }
 #endif
 
@@ -81,7 +81,7 @@ void MarkContext::ScanMemory(void ** obj, size_t byteCount)
 #if DBG_DUMP
     if (recycler->forceTraceMark || recycler->GetRecyclerFlagsTable().Trace.IsEnabled(Js::MarkPhase))
     {
-        Output::Print(L"\n");
+        Output::Print(CH_WSTR("\n"));
         Output::Flush();
     }
 #endif

+ 82 - 54
lib/Common/Memory/PageAllocator.cpp

@@ -104,10 +104,10 @@ SegmentBase<T>::Initialize(DWORD allocFlags, bool excludeGuardPages)
         if (addGuardPages)
         {
 #if DBG_DUMP
-            GUARD_PAGE_TRACE(L"Number of Leading Guard Pages: %d\n", leadingGuardPageCount);
-            GUARD_PAGE_TRACE(L"Starting address of Leading Guard Pages: 0x%p\n", address);
-            GUARD_PAGE_TRACE(L"Offset of Segment Start address: 0x%p\n", this->address + (leadingGuardPageCount*AutoSystemInfo::PageSize));
-            GUARD_PAGE_TRACE(L"Starting address of Trailing Guard Pages: 0x%p\n", address + ((leadingGuardPageCount + this->segmentPageCount)*AutoSystemInfo::PageSize));
+            GUARD_PAGE_TRACE(CH_WSTR("Number of Leading Guard Pages: %d\n"), leadingGuardPageCount);
+            GUARD_PAGE_TRACE(CH_WSTR("Starting address of Leading Guard Pages: 0x%p\n"), address);
+            GUARD_PAGE_TRACE(CH_WSTR("Offset of Segment Start address: 0x%p\n"), this->address + (leadingGuardPageCount*AutoSystemInfo::PageSize));
+            GUARD_PAGE_TRACE(CH_WSTR("Starting address of Trailing Guard Pages: 0x%p\n"), address + ((leadingGuardPageCount + this->segmentPageCount)*AutoSystemInfo::PageSize));
 #endif
 #pragma warning(suppress: 6250)
             GetAllocator()->GetVirtualAllocator()->Free(address, leadingGuardPageCount*AutoSystemInfo::PageSize, MEM_DECOMMIT);
@@ -181,7 +181,7 @@ template<typename T>
 bool
 PageSegmentBase<T>::Initialize(DWORD allocFlags, bool excludeGuardPages)
 {
-    Assert(freePageCount + allocator->secondaryAllocPageCount == this->segmentPageCount || freePageCount == 0);
+    Assert(freePageCount + this->allocator->secondaryAllocPageCount == this->segmentPageCount || freePageCount == 0);
     if (__super::Initialize(allocFlags, excludeGuardPages))
     {
         if (freePageCount != 0)
@@ -263,7 +263,7 @@ PageSegmentBase<T>::AllocPages(uint pageCount, PageHeapMode pageHeapFlags)
     uint index = this->GetNextBitInFreePagesBitVector(0);
     while (index != -1)
     {
-        Assert(index < allocator->GetMaxAllocPageCount());
+        Assert(index < this->allocator->GetMaxAllocPageCount());
 
         if (GetAvailablePageCount() - index < pageCount)
         {
@@ -313,7 +313,7 @@ PageSegmentBase<TVirtualAlloc>::AllocDecommitPages(uint pageCount, T freePages,
     {
         return nullptr;
     }
-    Assert(secondaryAllocator == nullptr || secondaryAllocator->CanAllocate());
+    Assert(this->secondaryAllocator == nullptr || this->secondaryAllocator->CanAllocate());
 
     T freeAndDecommitPages = freePages;
 
@@ -323,7 +323,7 @@ PageSegmentBase<TVirtualAlloc>::AllocDecommitPages(uint pageCount, T freePages,
     uint index = freeAndDecommitPages.GetNextBit(0);
     while (index != -1)
     {
-        Assert(index < allocator->GetMaxAllocPageCount());
+        Assert(index < this->allocator->GetMaxAllocPageCount());
 
         if (GetAvailablePageCount() - index < pageCount)
         {
@@ -377,8 +377,8 @@ void
 PageSegmentBase<T>::ReleasePages(__in void * address, uint pageCount)
 {
     Assert(address >= this->address);
-    Assert(pageCount <= allocator->maxAllocPageCount);
-    Assert(((uint)(((char *)address) - this->address)) <= (allocator->maxAllocPageCount - pageCount) *  AutoSystemInfo::PageSize);
+    Assert(pageCount <= this->allocator->maxAllocPageCount);
+    Assert(((uint)(((char *)address) - this->address)) <= (this->allocator->maxAllocPageCount - pageCount) *  AutoSystemInfo::PageSize);
     Assert(!IsFreeOrDecommitted(address, pageCount));
 
     uint base = this->GetBitRangeBase(address);
@@ -448,8 +448,8 @@ void
 PageSegmentBase<T>::DecommitPages(__in void * address, uint pageCount)
 {
     Assert(address >= this->address);
-    Assert(pageCount <= allocator->maxAllocPageCount);
-    Assert(((uint)(((char *)address) - this->address)) <= (allocator->maxAllocPageCount - pageCount) * AutoSystemInfo::PageSize);
+    Assert(pageCount <= this->allocator->maxAllocPageCount);
+    Assert(((uint)(((char *)address) - this->address)) <= (this->allocator->maxAllocPageCount - pageCount) * AutoSystemInfo::PageSize);
 
     Assert(!IsFreeOrDecommitted(address, pageCount));
     uint base = this->GetBitRangeBase(address);
@@ -500,7 +500,7 @@ PageSegmentBase<T>::DecommitFreePages(size_t pageToDecommit)
 // PageAllocator
 //=============================================================================================================
 #if DBG
-#define ASSERT_THREAD() AssertMsg(ValidThreadAccess(), "Page allocation should only be used by a single thread");
+#define ASSERT_THREAD() AssertMsg(this->ValidThreadAccess(), "Page allocation should only be used by a single thread");
 #else
 #define ASSERT_THREAD()
 #endif
@@ -530,6 +530,7 @@ PageAllocatorBase<T>::GetProcessUsedBytes()
     return totalUsedBytes;
 }
 
+#if ENABLE_BACKGROUND_PAGE_ZEROING
 template<typename T>
 PageAllocatorBase<T>::BackgroundPageQueue::BackgroundPageQueue()
 {
@@ -543,6 +544,7 @@ PageAllocatorBase<T>::ZeroPageQueue::ZeroPageQueue()
     ::InitializeSListHead(&pendingZeroPageList);
     DebugOnly(this->isZeroPageQueue = true);
 }
+#endif
 
 template<typename T>
 uint
@@ -557,7 +559,11 @@ PageAllocatorBase<T>::PageAllocatorBase(AllocationPolicyManager * policyManager,
     Js::ConfigFlagsTable& flagTable,
 #endif
     PageAllocatorType type,
-    uint maxFreePageCount, bool zeroPages,  BackgroundPageQueue * backgroundPageQueue, uint maxAllocPageCount, uint secondaryAllocPageCount,
+    uint maxFreePageCount, bool zeroPages,
+#if ENABLE_BACKGROUND_PAGE_FREEING
+    BackgroundPageQueue * backgroundPageQueue,
+#endif
+    uint maxAllocPageCount, uint secondaryAllocPageCount,
     bool stopAllocationOnOutOfMemory, bool excludeGuardPages) :
     policyManager(policyManager),
 #ifndef JD_PRIVATE
@@ -567,9 +573,11 @@ PageAllocatorBase<T>::PageAllocatorBase(AllocationPolicyManager * policyManager,
     freePageCount(0),
     allocFlags(0),
     zeroPages(zeroPages),
+#if ENABLE_BACKGROUND_PAGE_ZEROING
     queueZeroPages(false),
     hasZeroQueuedPages(false),
     backgroundPageQueue(backgroundPageQueue),
+#endif
     minFreePageCount(0),
     isUsed(false),
     idleDecommitEnterCount(1),
@@ -621,7 +629,7 @@ PageAllocatorBase<T>::~PageAllocatorBase()
     AssertMsg(this->ValidThreadAccess(), "Page allocator tear-down should only happen on the owning thread");
 
 #if DBG
-    Assert(!HasMultiThreadAccess());
+    Assert(!this->HasMultiThreadAccess());
 #endif
 
     SubUsedBytes(usedBytes);
@@ -638,6 +646,7 @@ PageAllocatorBase<T>::~PageAllocatorBase()
     PageTracking::PageAllocatorDestroyed((PageAllocator*)this);
 }
 
+#if ENABLE_BACKGROUND_PAGE_ZEROING
 template<typename T>
 void
 PageAllocatorBase<T>::StartQueueZeroPage()
@@ -675,6 +684,7 @@ PageAllocatorBase<T>::HasZeroQueuedPages() const
     return hasZeroQueuedPages;
 }
 #endif
+#endif
 
 template<typename T>
 PageAllocation *
@@ -720,7 +730,7 @@ template<typename T>
 PageSegmentBase<T> *
 PageAllocatorBase<T>::AddPageSegment(DListBase<PageSegmentBase<T>>& segmentList)
 {
-    Assert(!HasMultiThreadAccess());
+    Assert(!this->HasMultiThreadAccess());
 
     PageSegmentBase<T> * segment = AllocPageSegment(segmentList, this, false);
 
@@ -757,7 +767,7 @@ template<typename T>
 PageSegmentBase<T> *
 HeapPageAllocator<T>::AddPageSegment(DListBase<PageSegmentBase<T>>& segmentList)
 {
-    Assert(!HasMultiThreadAccess());
+    Assert(!this->HasMultiThreadAccess());
 
     PageSegmentBase<T> * segment = this->AllocPageSegment(segmentList, this, false);
 
@@ -774,7 +784,7 @@ template <bool notPageAligned>
 char *
 PageAllocatorBase<T>::TryAllocFreePages(uint pageCount, PageSegmentBase<T> ** pageSegment, PageHeapMode pageHeapFlags)
 {
-    Assert(!HasMultiThreadAccess());
+    Assert(!this->HasMultiThreadAccess());
     if (this->freePageCount < pageCount)
     {
         return nullptr;
@@ -807,6 +817,7 @@ PageAllocatorBase<T>::TryAllocFreePages(uint pageCount, PageSegmentBase<T> ** pa
         }
     }
 
+#if ENABLE_BACKGROUND_PAGE_ZEROING
     if (pageCount == 1 && backgroundPageQueue != nullptr)
     {
         FreePageEntry * freePage = (FreePageEntry *)::InterlockedPopEntrySList(&backgroundPageQueue->freePageList);
@@ -832,7 +843,8 @@ PageAllocatorBase<T>::TryAllocFreePages(uint pageCount, PageSegmentBase<T> ** pa
             return (char *)pages;
         }
     }
-
+#endif
+    
     return nullptr;
 }
 
@@ -894,7 +906,7 @@ template <bool notPageAligned>
 char *
 PageAllocatorBase<T>::TryAllocDecommittedPages(uint pageCount, PageSegmentBase<T> ** pageSegment, PageHeapMode pageHeapFlags)
 {
-    Assert(!HasMultiThreadAccess());
+    Assert(!this->HasMultiThreadAccess());
 
     typename DListBase<PageSegmentBase<T>>::EditingIterator i(&decommitSegments);
 
@@ -1168,7 +1180,7 @@ template <bool notPageAligned>
 char *
 PageAllocatorBase<T>::SnailAllocPages(uint pageCount, PageSegmentBase<T> ** pageSegment, PageHeapMode pageHeapFlags)
 {
-    Assert(!HasMultiThreadAccess());
+    Assert(!this->HasMultiThreadAccess());
 
     char * pages = nullptr;
     PageSegmentBase<T> * newSegment = nullptr;
@@ -1252,7 +1264,7 @@ template<typename T>
 DListBase<PageSegmentBase<T>> *
 PageAllocatorBase<T>::GetSegmentList(PageSegmentBase<T> * segment)
 {
-    Assert(!HasMultiThreadAccess());
+    Assert(!this->HasMultiThreadAccess());
 
     return
         (segment->IsAllDecommitted()) ? nullptr :
@@ -1283,7 +1295,7 @@ void
 PageAllocatorBase<T>::Release(void * address, size_t pageCount, void * segmentParam)
 {
     SegmentBase<T> * segment = (SegmentBase<T>*)segmentParam;
-    Assert(!HasMultiThreadAccess());
+    Assert(!this->HasMultiThreadAccess());
     Assert(segment->GetAllocator() == this);
     if (pageCount > this->maxAllocPageCount)
     {
@@ -1335,7 +1347,7 @@ PageAllocatorBase<T>::ReleasePages(__in void * address, uint pageCount, __in voi
     Assert(pageCount <= this->maxAllocPageCount);
     PageSegmentBase<T> * segment = (PageSegmentBase<T>*) segmentParam;
     ASSERT_THREAD();
-    Assert(!HasMultiThreadAccess());
+    Assert(!this->HasMultiThreadAccess());
 
 #if defined(RECYCLER_MEMORY_VERIFY) || defined(ARENA_MEMORY_VERIFY)
     if (disablePageReuse)
@@ -1400,13 +1412,15 @@ PageAllocatorBase<T>::ReleasePages(__in void * address, uint pageCount, __in voi
     }
     else
     {
+#if ENABLE_BACKGROUND_PAGE_ZEROING
         if (QueueZeroPages())
         {
             Assert(HasZeroPageQueue());
             AddPageToZeroQueue(address, pageCount, segment);
             return;
         }
-
+#endif
+        
         this->FillFreePages((char *)address, pageCount);
         segment->ReleasePages(address, pageCount);
         LogFreePages(pageCount);
@@ -1416,6 +1430,7 @@ PageAllocatorBase<T>::ReleasePages(__in void * address, uint pageCount, __in voi
     TransferSegment(segment, fromSegmentList);
 }
 
+#if ENABLE_BACKGROUND_PAGE_ZEROING
 template<class T>
 typename PageAllocatorBase<T>::FreePageEntry *
 PageAllocatorBase<T>::PopPendingZeroPage()
@@ -1436,6 +1451,7 @@ PageAllocatorBase<T>::AddPageToZeroQueue(__in void * address, uint pageCount, __
     ::InterlockedPushEntrySList(&(((ZeroPageQueue *)backgroundPageQueue)->pendingZeroPageList), entry);
     this->hasZeroQueuedPages = true;
 }
+#endif
 
 template<typename T>
 void
@@ -1462,6 +1478,7 @@ PageAllocatorBase<T>::TransferSegment(PageSegmentBase<T> * segment, DListBase<Pa
     }
 }
 
+#if ENABLE_BACKGROUND_PAGE_ZEROING
 template<typename T>
 void
 PageAllocatorBase<T>::BackgroundZeroQueuedPages()
@@ -1491,7 +1508,9 @@ PageAllocatorBase<T>::ZeroQueuedPages()
     }
     this->hasZeroQueuedPages = false;
 }
+#endif
 
+#if ENABLE_BACKGROUND_PAGE_FREEING
 template<typename T>
 void
 PageAllocatorBase<T>::BackgroundReleasePages(void * address, uint pageCount, PageSegmentBase<T> * segment)
@@ -1510,12 +1529,14 @@ PageAllocatorBase<T>::QueuePages(void * address, uint pageCount, PageSegmentBase
     freePageEntry->pageCount = pageCount;
     ::InterlockedPushEntrySList(&backgroundPageQueue->freePageList, freePageEntry);
 }
+#endif
 
+#if ENABLE_BACKGROUND_PAGE_FREEING
 template<typename T>
 void
 PageAllocatorBase<T>::FlushBackgroundPages()
 {
-    Assert(!HasMultiThreadAccess());
+    Assert(!this->HasMultiThreadAccess());
     Assert(backgroundPageQueue);
 
     // We can have additional pages queued up to be zeroed out here
@@ -1546,9 +1567,10 @@ PageAllocatorBase<T>::FlushBackgroundPages()
 
     LogFreePages(newFreePages);
 
-    PAGE_ALLOC_VERBOSE_TRACE(L"New free pages: %d\n", newFreePages);
+    PAGE_ALLOC_VERBOSE_TRACE(CH_WSTR("New free pages: %d\n"), newFreePages);
     this->AddFreePageCount(newFreePages);
 }
+#endif
 
 template<typename T>
 void
@@ -1561,7 +1583,7 @@ PageAllocatorBase<T>::SuspendIdleDecommit()
     }
     Assert(this->IsIdleDecommitPageAllocator());
     ((IdleDecommitPageAllocator *)this)->cs.Enter();
-    PAGE_ALLOC_VERBOSE_TRACE(L"SuspendIdleDecommit");
+    PAGE_ALLOC_VERBOSE_TRACE_0(CH_WSTR("SuspendIdleDecommit"));
 #endif
 }
 
@@ -1575,7 +1597,7 @@ PageAllocatorBase<T>::ResumeIdleDecommit()
         return;
     }
     Assert(this->IsIdleDecommitPageAllocator());
-    PAGE_ALLOC_VERBOSE_TRACE(L"ResumeIdleDecommit");
+    PAGE_ALLOC_VERBOSE_TRACE(CH_WSTR("ResumeIdleDecommit"));
     ((IdleDecommitPageAllocator *)this)->cs.Leave();
 #endif
 }
@@ -1584,11 +1606,12 @@ template<typename T>
 void
 PageAllocatorBase<T>::DecommitNow(bool all)
 {
-    Assert(!HasMultiThreadAccess());
+    Assert(!this->HasMultiThreadAccess());
 
 #if DBG_DUMP
     size_t deleteCount = 0;
 #endif
+#if ENABLE_BACKGROUND_PAGE_ZEROING
     // First, drain the zero page queue.
     // This will cause the free page count to be accurate
     if (HasZeroPageQueue())
@@ -1604,7 +1627,7 @@ PageAllocatorBase<T>::DecommitNow(bool all)
             {
                 break;
             }
-            PAGE_ALLOC_TRACE_AND_STATS(L"Freeing page from zero queue");
+            PAGE_ALLOC_TRACE_AND_STATS_0(CH_WSTR("Freeing page from zero queue"));
             PageSegmentBase<T> * segment = freePageEntry->segment;
             uint pageCount = freePageEntry->pageCount;
 
@@ -1652,14 +1675,15 @@ PageAllocatorBase<T>::DecommitNow(bool all)
 
         FlushBackgroundPages();
     }
-
+#endif
+    
     if (this->freePageCount == 0)
     {
         Assert(debugMinFreePageCount == 0);
         return;
     }
 
-    PAGE_ALLOC_TRACE_AND_STATS(L"Decommit now");
+    PAGE_ALLOC_TRACE_AND_STATS_0(CH_WSTR("Decommit now"));
 
     // minFreePageCount is not updated on every page allocate,
     // so we have to do a final update here.
@@ -1671,7 +1695,7 @@ PageAllocatorBase<T>::DecommitNow(bool all)
     {
         newFreePageCount = this->GetFreePageLimit();
 
-        PAGE_ALLOC_TRACE_AND_STATS(L"Full decommit");
+        PAGE_ALLOC_TRACE_AND_STATS_0(CH_WSTR("Full decommit"));
     }
     else
     {
@@ -1682,20 +1706,20 @@ PageAllocatorBase<T>::DecommitNow(bool all)
         // Ensure we don't decommit down to fewer than our partial decommit minimum
         newFreePageCount = max(newFreePageCount, static_cast<size_t>(MinPartialDecommitFreePageCount));
 
-        PAGE_ALLOC_TRACE_AND_STATS(L"Partial decommit");
+        PAGE_ALLOC_TRACE_AND_STATS_0(CH_WSTR("Partial decommit"));
     }
 
     if (newFreePageCount >= this->freePageCount)
     {
-        PAGE_ALLOC_TRACE_AND_STATS(L"No pages to decommit");
+        PAGE_ALLOC_TRACE_AND_STATS_0(CH_WSTR("No pages to decommit"));
         return;
     }
 
     size_t pageToDecommit = this->freePageCount - newFreePageCount;
 
-    PAGE_ALLOC_TRACE_AND_STATS(L"Decommit page count = %d", pageToDecommit);
-    PAGE_ALLOC_TRACE_AND_STATS(L"Free page count = %d", this->freePageCount);
-    PAGE_ALLOC_TRACE_AND_STATS(L"New free page count = %d", newFreePageCount);
+    PAGE_ALLOC_TRACE_AND_STATS(CH_WSTR("Decommit page count = %d"), pageToDecommit);
+    PAGE_ALLOC_TRACE_AND_STATS(CH_WSTR("Free page count = %d"), this->freePageCount);
+    PAGE_ALLOC_TRACE_AND_STATS(CH_WSTR("New free page count = %d"), newFreePageCount);
 
 #if DBG_DUMP
     size_t decommitCount = 0;
@@ -1790,7 +1814,7 @@ PageAllocatorBase<T>::DecommitNow(bool all)
     {
         if (CUSTOM_PHASE_STATS1(this->pageAllocatorFlagTable, Js::PageAllocatorPhase))
         {
-            Output::Print(L" After decommit now:\n");
+            Output::Print(CH_WSTR(" After decommit now:\n"));
             this->DumpStats();
         }
         Output::Flush();
@@ -1922,7 +1946,7 @@ PageAllocatorBase<T>::IntegrateSegments(DListBase<PageSegmentBase<T>>& segmentLi
 #if DBG
     size_t debugPageCount = 0;
     uint debugSegmentCount = 0;
-    DListBase<PageSegmentBase<T>>::Iterator i(&segmentList);
+    typename DListBase<PageSegmentBase<T>>::Iterator i(&segmentList);
     while (i.Next())
     {
         Assert(i.Data().GetAllocator() == this);
@@ -2086,10 +2110,10 @@ template<typename T>
 void
 PageAllocatorBase<T>::DumpStats() const
 {
-    Output::Print(L"  Full/Partial/Empty/Decommit/Large Segments: %4d %4d %4d %4d %4d\n",
+    Output::Print(CH_WSTR("  Full/Partial/Empty/Decommit/Large Segments: %4d %4d %4d %4d %4d\n"),
         fullSegments.Count(), segments.Count(), emptySegments.Count(), decommitSegments.Count(), largeSegments.Count());
 
-    Output::Print(L"  Free/Decommit/Min Free Pages              : %4d %4d %4d\n",
+    Output::Print(CH_WSTR("  Free/Decommit/Min Free Pages              : %4d %4d %4d\n"),
         this->freePageCount, this->decommitPageCount, this->minFreePageCount);
 }
 #endif
@@ -2099,28 +2123,30 @@ template<typename T>
 void
 PageAllocatorBase<T>::Check()
 {
+#if ENABLE_BACKGROUND_PAGE_ZEROING
     Assert(!this->HasZeroQueuedPages());
+#endif
     size_t currentFreePageCount = 0;
 
-    DListBase<PageSegmentBase<T>>::Iterator segmentsIterator(&segments);
+    typename DListBase<PageSegmentBase<T>>::Iterator segmentsIterator(&segments);
     while (segmentsIterator.Next())
     {
         currentFreePageCount += segmentsIterator.Data().GetFreePageCount();
     }
 
-    DListBase<PageSegmentBase<T>>::Iterator fullSegmentsIterator(&fullSegments);
+    typename DListBase<PageSegmentBase<T>>::Iterator fullSegmentsIterator(&fullSegments);
     while (fullSegmentsIterator.Next())
     {
         currentFreePageCount += fullSegmentsIterator.Data().GetFreePageCount();
     }
 
-    DListBase<PageSegmentBase<T>>::Iterator emptySegmentsIterator(&emptySegments);
+    typename DListBase<PageSegmentBase<T>>::Iterator emptySegmentsIterator(&emptySegments);
     while (emptySegmentsIterator.Next())
     {
         currentFreePageCount += emptySegmentsIterator.Data().GetFreePageCount();
     }
 
-    DListBase<PageSegmentBase<T>>::Iterator decommitSegmentsIterator(&decommitSegments);
+    typename DListBase<PageSegmentBase<T>>::Iterator decommitSegmentsIterator(&decommitSegments);
     while (decommitSegmentsIterator.Next())
     {
         currentFreePageCount += decommitSegmentsIterator.Data().GetFreePageCount();
@@ -2137,7 +2163,9 @@ HeapPageAllocator<T>::HeapPageAllocator(AllocationPolicyManager * policyManager,
         PageAllocatorType_CustomHeap,
         /*maxFreePageCount*/ 0,
         /*zeroPages*/ false,
+#if ENABLE_BACKGROUND_PAGE_FREEING || ENABLE_BACKGROUND_PAGE_ZEROING
         /*zeroPageQueue*/ nullptr,
+#endif
         /*maxAllocPageCount*/ allocXdata ? (Base::DefaultMaxAllocPageCount - XDATA_RESERVE_PAGE_COUNT) : Base::DefaultMaxAllocPageCount,
         /*secondaryAllocPageCount=*/ allocXdata ? XDATA_RESERVE_PAGE_COUNT : 0,
         /*stopAllocationOnOutOfMemory*/ false,
@@ -2202,7 +2230,7 @@ HeapPageAllocator<T>::ProtectPages(__in char* address, size_t pageCount, __in vo
     Assert(address >= segment->GetAddress());
     Assert(((uint)(((char *)address) - segment->GetAddress()) <= (segment->GetPageCount() - pageCount) * AutoSystemInfo::PageSize));
 
-    if (IsPageSegment(segment))
+    if (this->IsPageSegment(segment))
     {
         PageSegmentBase<T> * pageSegment = static_cast<PageSegmentBase<T>*>(segment);
         AssertMsg(pageCount <= MAXUINT32, "PageSegment should always be smaller than 4G pages");
@@ -2213,7 +2241,7 @@ HeapPageAllocator<T>::ProtectPages(__in char* address, size_t pageCount, __in vo
 #if DBG_DUMP || defined(RECYCLER_TRACE)
     if (this->pageAllocatorFlagTable.IsEnabled(Js::TraceProtectPagesFlag))
     {
-        Output::Print(L"VirtualProtect(0x%p, %d, %d, %d)\n", address, pageCount, pageCount * AutoSystemInfo::PageSize, dwVirtualProtectFlags);
+        Output::Print(CH_WSTR("VirtualProtect(0x%p, %d, %d, %d)\n"), address, pageCount, pageCount * AutoSystemInfo::PageSize, dwVirtualProtectFlags);
     }
 #endif
 
@@ -2268,7 +2296,7 @@ HeapPageAllocator<T>::TrackDecommittedPages(void * address, uint pageCount, __in
 {
     PageSegmentBase<T> * segment = (PageSegmentBase<T>*)segmentParam;
     ASSERT_THREAD();
-    Assert(!HasMultiThreadAccess());
+    Assert(!this->HasMultiThreadAccess());
     Assert(pageCount <= this->maxAllocPageCount);
 
     DListBase<PageSegmentBase<T>> * fromSegmentList = this->GetSegmentList(segment);
@@ -2299,9 +2327,9 @@ bool HeapPageAllocator<T>::AllocSecondary(void* segmentParam, ULONG_PTR function
         // If no more XDATA allocations can take place.
         if (success && !pageSegment->CanAllocSecondary() && fromSegmentList != &this->fullSegments)
         {
-            AssertMsg(GetSegmentList(pageSegment) == &fullSegments, "This segment should now be in the full list if it can't allocate secondary");
+            AssertMsg(this->GetSegmentList(pageSegment) == &this->fullSegments, "This segment should now be in the full list if it can't allocate secondary");
 
-            OUTPUT_TRACE(Js::EmitterPhase, L"XDATA Wasted pages:%u\n", pageSegment->GetFreePageCount());
+            OUTPUT_TRACE(Js::EmitterPhase, CH_WSTR("XDATA Wasted pages:%u\n"), pageSegment->GetFreePageCount());
             this->freePageCount -= pageSegment->GetFreePageCount();
             fromSegmentList->MoveElementTo(pageSegment, &this->fullSegments);
 #if DBG
@@ -2341,10 +2369,10 @@ void HeapPageAllocator<T>::ReleaseSecondary(const SecondaryAllocation& allocatio
 
         if (fromList != toList)
         {
-            OUTPUT_TRACE(Js::EmitterPhase, L"XDATA reclaimed pages:%u\n", pageSegment->GetFreePageCount());
+            OUTPUT_TRACE(Js::EmitterPhase, CH_WSTR("XDATA reclaimed pages:%u\n"), pageSegment->GetFreePageCount());
             fromList->MoveElementTo(pageSegment, toList);
 
-            AssertMsg(fromList == &fullSegments, "Releasing a secondary allocator should make a state change only if the segment was originally in the full list");
+            AssertMsg(fromList == &this->fullSegments, "Releasing a secondary allocator should make a state change only if the segment was originally in the full list");
             AssertMsg(pageSegment->CanAllocSecondary(), "It should be allocate secondary now");
             this->AddFreePageCount(pageSegment->GetFreePageCount());
         }

+ 44 - 13
lib/Common/Memory/PageAllocator.h

@@ -25,22 +25,25 @@ typedef void* FunctionTableHandle;
 
 #define PAGE_ALLOC_TRACE(format, ...) PAGE_ALLOC_TRACE_EX(false, false, format, __VA_ARGS__)
 #define PAGE_ALLOC_VERBOSE_TRACE(format, ...) PAGE_ALLOC_TRACE_EX(true, false, format, __VA_ARGS__)
+#define PAGE_ALLOC_VERBOSE_TRACE_0(format) PAGE_ALLOC_TRACE_EX(true, false, format, "")
 
 #define PAGE_ALLOC_TRACE_AND_STATS(format, ...) PAGE_ALLOC_TRACE_EX(false, true, format, __VA_ARGS__)
+#define PAGE_ALLOC_TRACE_AND_STATS_0(format) PAGE_ALLOC_TRACE_EX(false, true, format, "")
 #define PAGE_ALLOC_VERBOSE_TRACE_AND_STATS(format, ...) PAGE_ALLOC_TRACE_EX(true, true, format, __VA_ARGS__)
+#define PAGE_ALLOC_VERBOSE_TRACE_AND_STATS_0(format) PAGE_ALLOC_TRACE_EX(true, true, format, "")
 
-#define PAGE_ALLOC_TRACE_EX(verbose, stats, format, ...) \
+#define PAGE_ALLOC_TRACE_EX(verbose, stats, format, ...)                \
     if (this->pageAllocatorFlagTable.Trace.IsEnabled(Js::PageAllocatorPhase)) \
     { \
         if (!verbose || this->pageAllocatorFlagTable.Verbose) \
         {   \
-            Output::Print(L"%p : %p> PageAllocator(%p): ", GetCurrentThreadContextId(), ::GetCurrentThreadId(), this); \
+            Output::Print(CH_WSTR("%p : %p> PageAllocator(%p): "), GetCurrentThreadContextId(), ::GetCurrentThreadId(), this); \
             if (debugName != nullptr) \
             { \
-                Output::Print(L"[%s] ", this->debugName); \
+                Output::Print(CH_WSTR("[%s] "), this->debugName);       \
             } \
             Output::Print(format, __VA_ARGS__);         \
-            Output::Print(L"\n"); \
+            Output::Print(CH_WSTR("\n"));                               \
             if (stats && this->pageAllocatorFlagTable.Stats.IsEnabled(Js::PageAllocatorPhase)) \
             { \
                 this->DumpStats();         \
@@ -51,9 +54,13 @@ typedef void* FunctionTableHandle;
 #else
 #define PAGE_ALLOC_TRACE(format, ...)
 #define PAGE_ALLOC_VERBOSE_TRACE(format, ...)
+#define PAGE_ALLOC_VERBOSE_TRACE_0(format)
 
 #define PAGE_ALLOC_TRACE_AND_STATS(format, ...)
 #define PAGE_ALLOC_VERBOSE_TRACE_AND_STATS(format, ...)
+#define PAGE_ALLOC_TRACE_AND_STATS_0(format)
+#define PAGE_ALLOC_VERBOSE_TRACE_AND_STATS_0(format)
+
 #endif
 
 #ifdef _M_X64
@@ -272,7 +279,7 @@ public:
 
     bool IsFreeOrDecommitted(void* address, uint pageCount) const
     {
-        Assert(IsInSegment(address));
+        Assert(this->IsInSegment(address));
 
         uint base = GetBitRangeBase(address);
         return this->TestRangeInDecommitPagesBitVector(base, pageCount) || this->TestRangeInFreePagesBitVector(base, pageCount);
@@ -280,7 +287,7 @@ public:
 
     bool IsFreeOrDecommitted(void* address) const
     {
-        Assert(IsInSegment(address));
+        Assert(this->IsInSegment(address));
 
         uint base = GetBitRangeBase(address);
         return this->TestInDecommitPagesBitVector(base) || this->TestInFreePagesBitVector(base);
@@ -396,6 +403,7 @@ public:
         Assert(false);
     }
 
+#if ENABLE_BACKGROUND_PAGE_FREEING
     struct BackgroundPageQueue
     {
         BackgroundPageQueue();
@@ -407,13 +415,17 @@ public:
         bool isZeroPageQueue;
 #endif
     };
+
+#if ENABLE_BACKGROUND_PAGE_ZEROING
     struct ZeroPageQueue : BackgroundPageQueue
     {
         ZeroPageQueue();
 
         SLIST_HEADER pendingZeroPageList;
     };
-
+#endif
+#endif
+    
     PageAllocatorBase(AllocationPolicyManager * policyManager,
 #ifndef JD_PRIVATE
         Js::ConfigFlagsTable& flags = Js::Configuration::Global.flags,
@@ -421,7 +433,9 @@ public:
         PageAllocatorType type = PageAllocatorType_Max,
         uint maxFreePageCount = DefaultMaxFreePageCount,
         bool zeroPages = false,
+#if ENABLE_BACKGROUND_PAGE_FREEING
         BackgroundPageQueue * backgroundPageQueue = nullptr,
+#endif
         uint maxAllocPageCount = DefaultMaxAllocPageCount,
         uint secondaryAllocPageCount = DefaultSecondaryAllocPageCount,
         bool stopAllocationOnOutOfMemory = false,
@@ -461,19 +475,25 @@ public:
     char * AllocPagesPageAligned(uint pageCount, PageSegmentBase<TVirtualAlloc> ** pageSegment, PageHeapMode pageHeapFlags);
 
     void ReleasePages(__in void * address, uint pageCount, __in void * pageSegment);
+#if ENABLE_BACKGROUND_PAGE_FREEING
     void BackgroundReleasePages(void * address, uint pageCount, PageSegmentBase<TVirtualAlloc> * pageSegment);
-
+#endif
+    
     // Decommit
     void DecommitNow(bool all = true);
     void SuspendIdleDecommit();
     void ResumeIdleDecommit();
 
+#if ENABLE_BACKGROUND_PAGE_ZEROING
     void StartQueueZeroPage();
     void StopQueueZeroPage();
     void ZeroQueuedPages();
     void BackgroundZeroQueuedPages();
+#endif
+#if ENABLE_BACKGROUND_PAGE_FREEING
     void FlushBackgroundPages();
-
+#endif
+    
     bool DisableAllocationOutOfMemory() const { return disableAllocationOutOfMemory; }
     void ResetDisableAllocationOutOfMemory() { disableAllocationOutOfMemory = false; }
 
@@ -552,11 +572,16 @@ protected:
     static PageSegmentBase<TVirtualAlloc> * AllocPageSegment(DListBase<PageSegmentBase<TVirtualAlloc>>& segmentList, PageAllocatorBase<TVirtualAlloc> * pageAllocator, bool external);
 
     // Zero Pages
+#if ENABLE_BACKGROUND_PAGE_ZEROING
     void AddPageToZeroQueue(__in void * address, uint pageCount, __in PageSegmentBase<TVirtualAlloc> * pageSegment);
     bool HasZeroPageQueue() const;
+#endif
+    
     bool ZeroPages() const { return zeroPages; }
+#if ENABLE_BACKGROUND_PAGE_ZEROING
     bool QueueZeroPages() const { return queueZeroPages; }
-
+#endif
+    
     FreePageEntry * PopPendingZeroPage();
 #if DBG
     void Check();
@@ -589,11 +614,15 @@ protected:
 #endif
 
     // zero pages
-    BackgroundPageQueue * backgroundPageQueue;
     bool zeroPages;
+#if ENABLE_BACKGROUND_PAGE_FREEING
+    BackgroundPageQueue * backgroundPageQueue;
+#if ENABLE_BACKGROUND_PAGE_ZEROING
     bool queueZeroPages;
     bool hasZeroQueuedPages;
-
+#endif
+#endif
+    
     // Idle Decommit
     bool isUsed;
     size_t minFreePageCount;
@@ -638,8 +667,10 @@ protected:
 private:
     uint GetSecondaryAllocPageCount() const { return this->secondaryAllocPageCount; }
     void IntegrateSegments(DListBase<PageSegmentBase<TVirtualAlloc>>& segmentList, uint segmentCount, size_t pageCount);
+#if ENABLE_BACKGROUND_PAGE_FREEING
     void QueuePages(void * address, uint pageCount, PageSegmentBase<TVirtualAlloc> * pageSegment);
-
+#endif
+    
     template <bool notPageAligned>
     char* AllocPagesInternal(uint pageCount, PageSegmentBase<TVirtualAlloc> ** pageSegment, PageHeapMode pageHeapModeFlags = PageHeapMode::PageHeapModeOff);
 

+ 6 - 1
lib/Common/Memory/PagePool.h

@@ -62,7 +62,12 @@ private:
 
 public:
     PagePool(Js::ConfigFlagsTable& flagsTable) :
-        pageAllocator(NULL, flagsTable, PageAllocatorType_GCThread, PageAllocator::DefaultMaxFreePageCount, false, nullptr, PageAllocator::DefaultMaxAllocPageCount, 0, true),
+        pageAllocator(NULL, flagsTable, PageAllocatorType_GCThread,
+            PageAllocator::DefaultMaxFreePageCount, false,
+#if ENABLE_BACKGROUND_PAGE_ZEROING
+            nullptr,
+#endif
+            PageAllocator::DefaultMaxAllocPageCount, 0, true),
         freePageList(nullptr),
         reservedPageList(nullptr)
     {

File diff suppressed because it is too large
+ 220 - 203
lib/Common/Memory/Recycler.cpp


+ 15 - 2
lib/Common/Memory/Recycler.h

@@ -9,7 +9,7 @@
 namespace Js
 {
     class Profiler;
-    enum Phase;
+    enum Phase: unsigned short;
 };
 
 namespace JsUtil
@@ -17,7 +17,9 @@ namespace JsUtil
     class ThreadService;
 };
 
+#ifdef STACK_BACK_TRACE
 class StackBackTraceNode;
+#endif
 class ScriptEngineBase;
 class JavascriptThreadService;
 
@@ -699,15 +701,24 @@ private:
 #if defined(CHECK_MEMORY_LEAK) || defined(LEAK_REPORT)
     struct PinRecord
     {
+#ifdef STACK_BACK_TRACE
         PinRecord() : refCount(0), stackBackTraces(nullptr) {}
+#else
+        PinRecord() : refCount(0) {}
+#endif
         PinRecord& operator=(uint newRefCount)
         {
-            Assert(stackBackTraces == nullptr); Assert(newRefCount == 0); refCount = 0; return *this;
+#ifdef STACK_BACK_TRACE
+            Assert(stackBackTraces == nullptr);
+#endif
+            Assert(newRefCount == 0); refCount = 0; return *this;
         }
         PinRecord& operator++() { ++refCount; return *this; }
         PinRecord& operator--() { --refCount; return *this; }
         operator uint() const { return refCount; }
+#ifdef STACK_BACK_TRACE
         StackBackTraceNode * stackBackTraces;
+#endif
     private:
         uint refCount;
     };
@@ -722,8 +733,10 @@ private:
     uint weakReferenceCleanupId;
 
     void * transientPinnedObject;
+#ifdef STACK_BACK_TRACE
 #if defined(CHECK_MEMORY_LEAK) || defined(LEAK_REPORT)
     StackBackTrace * transientPinnedObjectStackBackTrace;
+#endif
 #endif
 
     struct GuestArenaAllocator : public ArenaAllocator

+ 6 - 2
lib/Common/Memory/Recycler.inl

@@ -157,7 +157,7 @@ Recycler::AllocWithAttributesInlined(size_t size)
     }
 
 #ifdef RECYCLER_WRITE_BARRIER
-    SwbVerboseTrace(this->GetRecyclerFlagsTable(), L"Allocated SWB memory: 0x%p\n", memBlock);
+    SwbVerboseTrace(this->GetRecyclerFlagsTable(), CH_WSTR("Allocated SWB memory: 0x%p\n"), memBlock);
 
 #pragma prefast(suppress:6313, "attributes is a template parameter and can be 0")
     if (attributes & (NewTrackBit))
@@ -335,13 +335,15 @@ __inline RecyclerWeakReference<T>* Recycler::CreateWeakReferenceHandle(T* pStron
     // The entry returned is recycler-allocated memory
     RecyclerWeakReference<T>* weakRef = (RecyclerWeakReference<T>*) this->weakReferenceMap.Add((char*) pStrongReference, this);
 #if DBG
+#if ENABLE_RECYCLER_TYPE_TRACKING
     if (weakRef->typeInfo == nullptr)
     {
         weakRef->typeInfo = &typeid(T);
 #ifdef TRACK_ALLOC
         TrackAllocWeakRef(weakRef);
 #endif
-}
+    }
+#endif
 #endif
     return weakRef;
 }
@@ -355,9 +357,11 @@ __inline bool Recycler::FindOrCreateWeakReferenceHandle(T* pStrongReference, Rec
 #if DBG
     if (!ret)
     {
+#if ENABLE_RECYCLER_TYPE_TRACKING
         (*ppWeakRef)->typeInfo = &typeid(T);
 #ifdef TRACK_ALLOC
         TrackAllocWeakRef(*ppWeakRef);
+#endif
 #endif
     }
 #endif

+ 2 - 2
lib/Common/Memory/RecyclerObjectDumper.cpp

@@ -56,14 +56,14 @@ RecyclerObjectDumper::DumpObject(type_info const * typeinfo, bool isArray, void
 {
     if (typeinfo == nullptr)
     {
-        Output::Print(L"Address %p", objectAddress);
+        Output::Print(CH_WSTR("Address %p"), objectAddress);
     }
     else
     {
         DumpFunction dumpFunction;
         if (dumpFunctionMap == nullptr || !dumpFunctionMap->TryGetValue(typeinfo, &dumpFunction) || !dumpFunction(typeinfo, isArray, objectAddress))
         {
-            Output::Print(isArray? L"%S[] %p" : L"%S %p", typeinfo->name(), objectAddress);
+            Output::Print(isArray? CH_WSTR("%S[] %p") : CH_WSTR("%S %p"), typeinfo->name(), objectAddress);
         }
     }
 }

+ 6 - 6
lib/Common/Memory/RecyclerObjectGraphDumper.cpp

@@ -35,7 +35,7 @@ void RecyclerObjectGraphDumper::BeginDumpObject(wchar_t const * name, void * add
 {
     Assert(dumpObjectName == nullptr);
     Assert(dumpObject == nullptr);
-    swprintf_s(tempObjectName, _countof(tempObjectName), L"%s %p", name, address);
+    swprintf_s(tempObjectName, _countof(tempObjectName), CH_WSTR("%s %p"), name, address);
     dumpObjectName = tempObjectName;
 }
 
@@ -79,10 +79,10 @@ void RecyclerObjectGraphDumper::DumpObjectReference(void * objectAddress, bool r
             if (!this->param->dumpReferenceFunc(this->dumpObjectName, this->dumpObject, objectAddress))
                 return;
         }
-        Output::Print(L"\"");
+        Output::Print(CH_WSTR("\""));
         if (this->dumpObjectName)
         {
-            Output::Print(L"%s", this->dumpObjectName);
+            Output::Print(CH_WSTR("%s"), this->dumpObjectName);
         }
         else
         {
@@ -90,14 +90,14 @@ void RecyclerObjectGraphDumper::DumpObjectReference(void * objectAddress, bool r
 #ifdef PROFILE_RECYCLER_ALLOC
             RecyclerObjectDumper::DumpObject(this->dumpObjectTypeInfo, this->dumpObjectIsArray, this->dumpObject);
 #else
-            Output::Print(L"Address %p", objectAddress);
+            Output::Print(CH_WSTR("Address %p"), objectAddress);
 #endif
         }
 
-        Output::Print(remark? L"\" => \"" : L"\" -> \"");
+        Output::Print(remark? CH_WSTR("\" => \"") : CH_WSTR("\" -> \""));
         recycler->DumpObjectDescription(objectAddress);
 
-        Output::Print(L"\"\n");
+        Output::Print(CH_WSTR("\"\n"));
     }
 }
 #endif

+ 1 - 1
lib/Common/Memory/RecyclerObjectGraphDumper.h

@@ -49,7 +49,7 @@ public:
 #define DUMP_OBJECT_REFERENCE(recycler, address) if (recycler->objectGraphDumper != nullptr) { recycler->objectGraphDumper->DumpObjectReference(address, false); }
 #define DUMP_OBJECT_REFERENCE_REMARK(recycler, address) if (recycler->objectGraphDumper != nullptr && recycler->IsValidObject(address)) { recycler->objectGraphDumper->DumpObjectReference(address, true); }
 #define END_DUMP_OBJECT(recycler)  if (recycler->objectGraphDumper != nullptr)  { recycler->objectGraphDumper->EndDumpObject(); } }
-#define DUMP_IMPLICIT_ROOT(recycler, address) BEGIN_DUMP_OBJECT(recycler, L"Implicit Root"); DUMP_OBJECT_REFERENCE(recycler, address); END_DUMP_OBJECT(recycler);
+#define DUMP_IMPLICIT_ROOT(recycler, address) BEGIN_DUMP_OBJECT(recycler, CH_WSTR("Implicit Root")); DUMP_OBJECT_REFERENCE(recycler, address); END_DUMP_OBJECT(recycler);
 #else
 #define BEGIN_DUMP_OBJECT(recycler, address)
 #define BEGIN_DUMP_OBJECT_ADDRESS(name, address)

+ 5 - 1
lib/Common/Memory/RecyclerPageAllocator.cpp

@@ -15,7 +15,11 @@ RecyclerPageAllocator::RecyclerPageAllocator(Recycler* recycler, AllocationPolic
         flagTable,
 #endif
         0, maxFreePageCount,
-        true, &zeroPageQueue, maxAllocPageCount)
+        true,
+#if ENABLE_BACKGROUND_PAGE_ZEROING
+        &zeroPageQueue,
+#endif
+        maxAllocPageCount)
 {
     this->recycler = recycler;
 }

+ 3 - 0
lib/Common/Memory/RecyclerPageAllocator.h

@@ -36,7 +36,10 @@ private:
     static size_t GetAllWriteWatchPageCount(DListBase<T> * segmentList);
 #endif
 #endif
+#if ENABLE_BACKGROUND_PAGE_ZEROING
     ZeroPageQueue zeroPageQueue;
+#endif
+    
     Recycler* recycler;
 
     bool IsMemProtectMode();

+ 12 - 0
lib/Common/Memory/RecyclerPointers.h

@@ -146,6 +146,13 @@ private:
 };
 }
 
+#if USING_PAL_MINMAX
+#pragma push_macro("min")
+#pragma push_macro("max")
+#undef min
+#undef max
+#endif
+
 template<class T> inline
 const T& min(const T& a, const NoWriteBarrierField<T>& b) { return a < b ? a : b; }
 
@@ -165,6 +172,11 @@ const T& max(const T& a, const NoWriteBarrierField<T>& b) { return a > b ? a : b
 template<class T> inline
 const T& max(const NoWriteBarrierField<T>& a, const NoWriteBarrierField<T>& b) { return a > b ? a : b; }
 
+#if USING_PAL_MINMAX
+#pragma pop_macro("min")
+#pragma pop_macro("max")
+#endif
+
 // Disallow memcpy, memmove of WriteBarrierPtr
 
 template <typename T>

+ 5 - 0
lib/Common/Memory/RecyclerWeakReference.h

@@ -40,7 +40,10 @@ protected:
     SmallHeapBlock * weakRefHeapBlock;
     RecyclerWeakReferenceBase* next;
 #if DBG
+#if ENABLE_RECYCLER_TYPE_TRACKING
     type_info const * typeInfo;
+#endif
+    
 #if defined TRACK_ALLOC && defined(PERF_COUNTERS)
     PerfCounter::Counter * counter;
 #endif
@@ -356,7 +359,9 @@ private:
         AddEntry(entry, &buckets[targetBucket]);
         count++;
 #if DBG
+#if ENABLE_RECYCLER_TYPE_TRACKING
         entry->typeInfo = nullptr;
+#endif
 #if defined(TRACK_ALLOC) && defined(PERF_COUNTERS)
         entry->counter = nullptr;
 #endif

+ 4 - 4
lib/Common/Memory/RecyclerWriteBarrierManager.cpp

@@ -46,7 +46,7 @@ X64WriteBarrierCardTableManager::OnThreadInit()
     // We page in the card table sections for the current threads stack reservation
     // So any writes to stack allocated vars can also have the write barrier set
 
-    // xplat-dodo: Replace this on Windows too with GetCurrentThreadStackBounds
+    // xplat-todo: Replace this on Windows too with GetCurrentThreadStackBounds
 #ifdef _WIN32
     NT_TIB* teb = (NT_TIB*) ::NtCurrentTeb();
 
@@ -276,7 +276,7 @@ RecyclerWriteBarrierManager::WriteBarrier(void * address)
     // Global to process, use global configuration here
     if (PHASE_VERBOSE_TRACE1(Js::SWBPhase))
     {
-        Output::Print(L"Writing to 0x%p (CIndex: %u)\n", address, index);
+        Output::Print(CH_WSTR("Writing to 0x%p (CIndex: %u)\n"), address, index);
     }
 #endif
 }
@@ -291,7 +291,7 @@ RecyclerWriteBarrierManager::WriteBarrier(void * address, size_t ptrCount)
     uintptr_t endIndex = GetCardTableIndex(endAddress);
     Assert(startIndex <= endIndex);
     memset(cardTable + startIndex, 1, endIndex - startIndex);
-    GlobalSwbVerboseTrace(L"Writing to 0x%p (CIndex: %u-%u)\n", address, startIndex, endIndex);
+    GlobalSwbVerboseTrace(CH_WSTR("Writing to 0x%p (CIndex: %u-%u)\n"), address, startIndex, endIndex);
 #else
     uint bitShift = (((uint)address) >> s_BitArrayCardTableShift);
     uint bitMask = 0xFFFFFFFF << bitShift;
@@ -345,7 +345,7 @@ RecyclerWriteBarrierManager::ResetWriteBarrier(void * address, size_t pageCount)
     // Global to process, use global configuration here
     if (PHASE_VERBOSE_TRACE1(Js::SWBPhase))
     {
-        Output::Print(L"Resetting %u pages at CIndex: %u\n", address, pageCount, cardIndex);
+        Output::Print(CH_WSTR("Resetting %u pages at CIndex: %u\n"), address, pageCount, cardIndex);
     }
 #endif
 }

+ 2 - 0
lib/Common/Memory/SmallBlockDeclarations.inl

@@ -9,8 +9,10 @@
 
 template void SmallHeapBlockT<TBlockTypeAttributes>::ReleasePages<true>(Recycler * recycler);
 template void SmallHeapBlockT<TBlockTypeAttributes>::ReleasePages<false>(Recycler * recycler);
+#if ENABLE_BACKGROUND_PAGE_FREEING
 template void SmallHeapBlockT<TBlockTypeAttributes>::BackgroundReleasePagesSweep<true>(Recycler* recycler);
 template void SmallHeapBlockT<TBlockTypeAttributes>::BackgroundReleasePagesSweep<false>(Recycler* recycler);
+#endif
 template void SmallHeapBlockT<TBlockTypeAttributes>::ReleasePagesSweep<true>(Recycler * recycler);
 template void SmallHeapBlockT<TBlockTypeAttributes>::ReleasePagesSweep<false>(Recycler * recycler);
 template BOOL SmallHeapBlockT<TBlockTypeAttributes>::ReassignPages<true>(Recycler * recycler);

+ 4 - 4
lib/Common/Memory/SmallFinalizableHeapBlock.cpp

@@ -318,7 +318,7 @@ SmallFinalizableHeapBlockT<TBlockAttributes>::TransferDisposedObjects()
     // So just update the free object head.
     this->lastFreeObjectHead = this->freeObjectList;
 
-    RECYCLER_SLOW_CHECK(CheckFreeBitVector(true));
+    RECYCLER_SLOW_CHECK(this->CheckFreeBitVector(true));
 }
 
 template <class TBlockAttributes>
@@ -334,7 +334,7 @@ SmallFinalizableHeapBlockT<TBlockAttributes>::AddDisposedObjectFreeBitVector(Sma
         while (true)
         {
             uint bitIndex = this->GetAddressBitIndex(freeObject);
-            Assert(IsValidBitIndex(bitIndex));
+            Assert(this->IsValidBitIndex(bitIndex));
 
             // not allocable yet
             Assert(!this->GetDebugFreeBitVector()->Test(bitIndex));
@@ -424,7 +424,7 @@ SmallFinalizableHeapBlockT<TBlockAttributes>::CheckDisposedObjectFreeBitVector()
         while (true)
         {
             uint bitIndex = this->GetAddressBitIndex(freeObject);
-            Assert(IsValidBitIndex(bitIndex));
+            Assert(this->IsValidBitIndex(bitIndex));
             Assert(!this->GetDebugFreeBitVector()->Test(bitIndex));
             Assert(free->Test(bitIndex));
             verifyFreeCount++;
@@ -443,7 +443,7 @@ template <class TBlockAttributes>
 bool
 SmallFinalizableHeapBlockT<TBlockAttributes>::GetFreeObjectListOnAllocator(FreeObject ** freeObjectList)
 {
-    return GetFreeObjectListOnAllocatorImpl<SmallFinalizableHeapBlockT<TBlockAttributes>>(freeObjectList);
+    return this->template GetFreeObjectListOnAllocatorImpl<SmallFinalizableHeapBlockT<TBlockAttributes>>(freeObjectList);
 }
 
 #endif

+ 4 - 4
lib/Common/Memory/SmallFinalizableHeapBucket.cpp

@@ -60,7 +60,7 @@ SmallFinalizableHeapBucketBaseT<TBlockType>::GetNonEmptyHeapBlockCount(bool chec
     size_t currentHeapBlockCount =  __super::GetNonEmptyHeapBlockCount(false)
         + HeapBlockList::Count(pendingDisposeList)
         + HeapBlockList::Count(tempPendingDisposeList);
-    RECYCLER_SLOW_CHECK(Assert(!checkCount || heapBlockCount == currentHeapBlockCount));
+    RECYCLER_SLOW_CHECK(Assert(!checkCount || this->heapBlockCount == currentHeapBlockCount));
     return currentHeapBlockCount;
 }
 #endif
@@ -175,7 +175,7 @@ SmallFinalizableHeapBucketBaseT<TBlockType>::TransferDisposedObjects()
             heapBlock->TransferDisposedObjects();
 
             // in pageheap, we actually always have free object
-            Assert(heapBlock->HasFreeObject<false>());
+            Assert(heapBlock->template HasFreeObject<false>());
         });
 
 #ifdef RECYCLER_PAGE_HEAP
@@ -241,9 +241,9 @@ SmallFinalizableHeapBucketBaseT<TBlockType>::Verify()
         Assert(false);
     }
 
-    HeapBlockList::ForEach(this->pendingDisposeList, [&recyclerVerifyListConsistencyData](TBlockType * heapBlock)
+    HeapBlockList::ForEach(this->pendingDisposeList, [this, &recyclerVerifyListConsistencyData](TBlockType * heapBlock)
     {
-        DebugOnly(VerifyBlockConsistencyInList(heapBlock, recyclerVerifyListConsistencyData));
+        DebugOnly(this->VerifyBlockConsistencyInList(heapBlock, recyclerVerifyListConsistencyData));
         heapBlock->Verify(true);
     });
 #endif

+ 9 - 9
lib/Common/Memory/SmallHeapBlockAllocator.cpp

@@ -4,13 +4,6 @@
 //-------------------------------------------------------------------------------------------------------
 #include "CommonMemoryPch.h"
 
-namespace Memory
-{
-    EXPLICIT_INSTANTIATE_WITH_SMALL_HEAP_BLOCK_TYPE(SmallHeapBlockAllocator)
-
-    template __forceinline char* SmallHeapBlockAllocator<SmallNormalHeapBlock>::InlinedAllocImpl</*canFaultInject*/true>(Recycler * recycler, size_t sizeCat, ObjectInfoBits attributes);
-}
-
 template <typename TBlockType>
 SmallHeapBlockAllocator<TBlockType>::SmallHeapBlockAllocator() :
     freeObjectList(nullptr),
@@ -226,7 +219,7 @@ SmallHeapBlockAllocator<TBlockType>::TrackNativeAllocatedObjects()
     Assert(curr <= (char *)this->freeObjectList);
 
 #if DBG_DUMP
-    AllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), L"TrackNativeAllocatedObjects: recycler = 0x%p, sizeCat = %u, lastRuntimeAllocatedBlock = 0x%p, freeObjectList = 0x%p, nativeAllocatedObjectCount = %u\n",
+    AllocationVerboseTrace(recycler->GetRecyclerFlagsTable(), CH_WSTR("TrackNativeAllocatedObjects: recycler = 0x%p, sizeCat = %u, lastRuntimeAllocatedBlock = 0x%p, freeObjectList = 0x%p, nativeAllocatedObjectCount = %u\n"),
         recycler, sizeCat, this->lastNonNativeBumpAllocatedBlock, this->freeObjectList, ((char *)this->freeObjectList - curr) / sizeCat);
 #endif
 
@@ -247,7 +240,7 @@ SmallHeapBlockAllocator<TBlockType>::TrackNativeAllocatedObjects()
     size_t byteCount = ((char *)this->freeObjectList - curr);
 
 #if DBG_DUMP
-    AllocationVerboseTrace(L"TrackNativeAllocatedObjects: recycler = 0x%p, sizeCat = %u, lastRuntimeAllocatedBlock = 0x%p, freeObjectList = 0x%p, nativeAllocatedObjectCount = %u\n",
+    AllocationVerboseTrace(CH_WSTR("TrackNativeAllocatedObjects: recycler = 0x%p, sizeCat = %u, lastRuntimeAllocatedBlock = 0x%p, freeObjectList = 0x%p, nativeAllocatedObjectCount = %u\n"),
         recycler, sizeCat, this->lastNonNativeBumpAllocatedBlock, this->freeObjectList, ((char *)this->freeObjectList - curr) / sizeCat);
 #endif
 
@@ -262,3 +255,10 @@ SmallHeapBlockAllocator<TBlockType>::TrackNativeAllocatedObjects()
 #endif
 }
 #endif
+
+namespace Memory
+{
+    EXPLICIT_INSTANTIATE_WITH_SMALL_HEAP_BLOCK_TYPE(SmallHeapBlockAllocator)
+
+    template __forceinline char* SmallHeapBlockAllocator<SmallNormalHeapBlock>::InlinedAllocImpl</*canFaultInject*/true>(Recycler * recycler, size_t sizeCat, ObjectInfoBits attributes);
+}

+ 2 - 2
lib/Common/Memory/SmallHeapBlockAllocator.h

@@ -88,7 +88,7 @@ private:
     template <class TBlockAttributes>
     friend class SmallHeapBlockT;
 #endif
-#ifdef PROFILE_RECYCLER_ALLOC
+#if defined(PROFILE_RECYCLER_ALLOC) || defined(RECYCLER_MEMORY_VERIFY)
     HeapBucket * bucket;
 #endif
 
@@ -155,7 +155,7 @@ SmallHeapBlockAllocator<TBlockType>::PageHeapAlloc(Recycler * recycler, size_t s
         ((TBlockType*)block)->VerifyPageHeapAllocation(memBlock, mode);
 #endif
 
-        PageHeapVerboseTrace(recycler->GetRecyclerFlagsTable(), L"Allocated from block 0x%p\n", smallBlock);
+        PageHeapVerboseTrace(recycler->GetRecyclerFlagsTable(), CH_WSTR("Allocated from block 0x%p\n"), smallBlock);
 
         // Close off allocation from this block
         this->freeObjectList = (FreeObject*) this->endAddress;

+ 1 - 1
lib/Common/Memory/SmallLeafHeapBlock.cpp

@@ -51,7 +51,7 @@ template <class TBlockAttributes>
 bool
 SmallLeafHeapBlockT<TBlockAttributes>::GetFreeObjectListOnAllocator(FreeObject ** freeObjectList)
 {
-    return GetFreeObjectListOnAllocatorImpl<SmallLeafHeapBlockT<TBlockAttributes>>(freeObjectList);
+    return this->template GetFreeObjectListOnAllocatorImpl<SmallLeafHeapBlockT<TBlockAttributes>>(freeObjectList);
 }
 #endif
 

+ 3 - 3
lib/Common/Memory/SmallNormalHeapBlock.cpp

@@ -76,7 +76,7 @@ template <class TBlockAttributes>
 void
 SmallNormalHeapBlockT<TBlockAttributes>::ScanInitialImplicitRoots(Recycler * recycler)
 {
-    Assert(IsAnyNormalBlock());
+    Assert(this->IsAnyNormalBlock());
 
     uint const localObjectCount = this->objectCount;
     uint const localObjectSize = this->GetObjectSize();
@@ -111,7 +111,7 @@ template <class TBlockAttributes>
 void
 SmallNormalHeapBlockT<TBlockAttributes>::ScanNewImplicitRoots(Recycler * recycler)
 {
-    Assert(IsAnyNormalBlock());
+    Assert(this->IsAnyNormalBlock());
     __super::ScanNewImplicitRootsBase([recycler](void * objectAddress, size_t objectSize)
     {
         // TODO: only interior?
@@ -187,7 +187,7 @@ template <class TBlockAttributes>
 bool
 SmallNormalHeapBlockT<TBlockAttributes>::GetFreeObjectListOnAllocator(FreeObject ** freeObjectList)
 {
-    return GetFreeObjectListOnAllocatorImpl<SmallNormalHeapBlockT<TBlockAttributes>>(freeObjectList);
+    return this->template GetFreeObjectListOnAllocatorImpl<SmallNormalHeapBlockT<TBlockAttributes>>(freeObjectList);
 }
 #endif
 

+ 2 - 0
lib/Common/Memory/StressTest.cpp

@@ -8,7 +8,9 @@
 #include "Common/Int32Math.h"
 #include "DataStructures/List.h"
 #include "Memory/StressTest.h"
+#if !USING_PAL_STDLIB
 #include <malloc.h>
+#endif
 
 typedef JsUtil::BaseDictionary<TestObject*, bool, RecyclerNonLeafAllocator> ObjectTracker_t;
 typedef JsUtil::List<TestObject*, Recycler> ObjectList_t;

+ 9 - 9
lib/Common/Memory/VirtualAllocWrapper.cpp

@@ -111,7 +111,7 @@ PreReservedVirtualAllocWrapper::Shutdown()
     if (IsPreReservedRegionPresent())
     {
         success = VirtualFree(preReservedStartAddress, 0, MEM_RELEASE);
-        PreReservedHeapTrace(L"MEM_RELEASE the PreReservedSegment. Start Address: 0x%p, Size: 0x%x * 0x%x bytes", preReservedStartAddress, PreReservedAllocationSegmentCount,
+        PreReservedHeapTrace(CH_WSTR("MEM_RELEASE the PreReservedSegment. Start Address: 0x%p, Size: 0x%x * 0x%x bytes"), preReservedStartAddress, PreReservedAllocationSegmentCount,
             AutoSystemInfo::Data.GetAllocationGranularityPageSize());
         if (!success)
         {
@@ -187,7 +187,7 @@ LPVOID PreReservedVirtualAllocWrapper::Alloc(LPVOID lpAddress, size_t dwSize, DW
             if (AutoSystemInfo::Data.IsCFGEnabled())
             {
                 preReservedStartAddress = VirtualAlloc(NULL, bytes, MEM_RESERVE, PAGE_READWRITE);
-                PreReservedHeapTrace(L"Reserving PreReservedSegment For the first time(CFG Enabled). Address: 0x%p\n", preReservedStartAddress);
+                PreReservedHeapTrace(CH_WSTR("Reserving PreReservedSegment For the first time(CFG Enabled). Address: 0x%p\n"), preReservedStartAddress);
             }
             else
 #endif
@@ -195,14 +195,14 @@ LPVOID PreReservedVirtualAllocWrapper::Alloc(LPVOID lpAddress, size_t dwSize, DW
             {
                 //This code is used where CFG is not available, but still PreReserve optimization for CFG can be tested
                 preReservedStartAddress = VirtualAlloc(NULL, bytes, MEM_RESERVE, protectFlags);
-                PreReservedHeapTrace(L"Reserving PreReservedSegment For the first time(CFG Non-Enabled). Address: 0x%p\n", preReservedStartAddress);
+                PreReservedHeapTrace(CH_WSTR("Reserving PreReservedSegment For the first time(CFG Non-Enabled). Address: 0x%p\n"), preReservedStartAddress);
             }
         }
 
         //Return nullptr, if no space to Reserve
         if (preReservedStartAddress == NULL)
         {
-            PreReservedHeapTrace(L"No space to pre-reserve memory with %d pages. Returning NULL\n", PreReservedAllocationSegmentCount * AutoSystemInfo::Data.GetAllocationGranularityPageCount());
+            PreReservedHeapTrace(CH_WSTR("No space to pre-reserve memory with %d pages. Returning NULL\n"), PreReservedAllocationSegmentCount * AutoSystemInfo::Data.GetAllocationGranularityPageCount());
             return nullptr;
         }
 
@@ -224,7 +224,7 @@ LPVOID PreReservedVirtualAllocWrapper::Alloc(LPVOID lpAddress, size_t dwSize, DW
                 if ((freeSegments.Length() - freeSegmentsBVIndex < requestedNumOfSegments) ||
                     freeSegmentsBVIndex == BVInvalidIndex)
                 {
-                    PreReservedHeapTrace(L"No more space to commit in PreReserved Memory region.\n");
+                    PreReservedHeapTrace(CH_WSTR("No more space to commit in PreReserved Memory region.\n"));
                     return nullptr;
                 }
             } while (!freeSegments.TestRange(freeSegmentsBVIndex, static_cast<uint>(requestedNumOfSegments)));
@@ -303,7 +303,7 @@ LPVOID PreReservedVirtualAllocWrapper::Alloc(LPVOID lpAddress, size_t dwSize, DW
             freeSegments.ClearRange(freeSegmentsBVIndex, static_cast<uint>(requestedNumOfSegments));
         }
 
-        PreReservedHeapTrace(L"MEM_COMMIT: StartAddress: 0x%p of size: 0x%x * 0x%x bytes \n", committedAddress, requestedNumOfSegments, AutoSystemInfo::Data.GetAllocationGranularityPageSize());
+        PreReservedHeapTrace(CH_WSTR("MEM_COMMIT: StartAddress: 0x%p of size: 0x%x * 0x%x bytes \n"), committedAddress, requestedNumOfSegments, AutoSystemInfo::Data.GetAllocationGranularityPageSize());
         return committedAddress;
     }
 }
@@ -342,7 +342,7 @@ PreReservedVirtualAllocWrapper::Free(LPVOID lpAddress, size_t dwSize, DWORD dwFr
 
         if (success)
         {
-            PreReservedHeapTrace(L"MEM_DECOMMIT: Address: 0x%p of size: 0x%x bytes\n", lpAddress, dwSize);
+            PreReservedHeapTrace(CH_WSTR("MEM_DECOMMIT: Address: 0x%p of size: 0x%x bytes\n"), lpAddress, dwSize);
         }
 
         if (success && (dwFreeType & MEM_RELEASE) != 0)
@@ -355,7 +355,7 @@ PreReservedVirtualAllocWrapper::Free(LPVOID lpAddress, size_t dwSize, DWORD dwFr
             BVIndex freeSegmentsBVIndex = (BVIndex) (((uintptr_t) lpAddress - (uintptr_t) preReservedStartAddress) / AutoSystemInfo::Data.GetAllocationGranularityPageSize());
             AssertMsg(freeSegmentsBVIndex < PreReservedAllocationSegmentCount, "Invalid Index ?");
             freeSegments.SetRange(freeSegmentsBVIndex, static_cast<uint>(requestedNumOfSegments));
-            PreReservedHeapTrace(L"MEM_RELEASE: Address: 0x%p of size: 0x%x * 0x%x bytes\n", lpAddress, requestedNumOfSegments, AutoSystemInfo::Data.GetAllocationGranularityPageSize());
+            PreReservedHeapTrace(CH_WSTR("MEM_RELEASE: Address: 0x%p of size: 0x%x * 0x%x bytes\n"), lpAddress, requestedNumOfSegments, AutoSystemInfo::Data.GetAllocationGranularityPageSize());
         }
         return success;
     }
@@ -402,7 +402,7 @@ AutoEnableDynamicCodeGen::AutoEnableDynamicCodeGen(bool enable) : enabled(false)
         {
             PGET_PROCESS_MITIGATION_POLICY_PROC GetProcessMitigationPolicyProc = nullptr;
 
-            HMODULE module = GetModuleHandleW(L"api-ms-win-core-processthreads-l1-1-3.dll");
+            HMODULE module = GetModuleHandleW(CH_WSTR("api-ms-win-core-processthreads-l1-1-3.dll"));
 
             if (module != nullptr)
             {

+ 36 - 0
lib/Common/Memory/amd64/amd64_SAVE_REGISTERS.S

@@ -0,0 +1,36 @@
+// -------------------------------------------------------------------------------------------------------
+// Copyright (C) Microsoft. All rights reserved.
+// Licensed under the MIT license. See LICENSE.txt file in the project root for full license information.
+// -------------------------------------------------------------------------------------------------------
+
+.intel_syntax noprefix
+
+//void amd64_SAVE_REGISTERS(registers)
+//
+//   This method stores the 16 general-purpose registers into the passed-in array.
+//   By convention, the stack pointer will always be stored at registers[0]
+//
+//       void* registers[16];
+//       amd64_SAVE_REGISTERS(registers);
+//
+.globl amd64_SAVE_REGISTERS 
+amd64_SAVE_REGISTERS:   
+        mov [rdi+00h], rsp
+        mov [rdi+08h], rax
+        mov [rdi+10h], rbx
+        mov [rdi+18h], rcx
+        mov [rdi+20h], rdx
+        mov [rdi+28h], rbp
+        mov [rdi+30h], rsi
+        mov [rdi+38h], rdi
+        mov [rdi+40h], r8
+        mov [rdi+48h], r9
+        mov [rdi+50h], r10
+        mov [rdi+58h], r11
+        mov [rdi+60h], r12
+        mov [rdi+68h], r13
+        mov [rdi+70h], r14
+        mov [rdi+78h], r15
+        ret
+
+

+ 1 - 1
lib/Common/Util/CMakeLists.txt

@@ -1,4 +1,4 @@
-add_library (Chakra.Common.Util
+add_library (Chakra.Common.Util STATIC
     Pinned.cpp)
 
 target_include_directories (

+ 1 - 1
lib/Parser/Parse.h

@@ -318,7 +318,7 @@ public:
 
 
 #ifdef ENABLE_DEBUG_CONFIG_OPTIONS
-    WCHAR* GetParseType() const
+    LPCWSTR GetParseType() const
     {
         switch(m_parseType)
         {

+ 42 - 16
pal/inc/pal.h

@@ -171,9 +171,7 @@ extern "C" {
 /******************* Compiler-specific glue *******************************/
 
 #ifndef _MSC_VER
-#if defined(CORECLR)
 #define FEATURE_PAL_SXS 1
-#endif
 #endif // !_MSC_VER
 
 #if defined(_MSC_VER) || defined(__llvm__)
@@ -322,6 +320,11 @@ typedef char * va_list;
 
 #endif // !PAL_STDCPP_COMPAT
 
+#if defined(__clang__) || defined(__GNUC__)
+#define PAL_GLOBAL __attribute__((init_priority(200)))
+#else
+#define PAL_GLOBAL
+#endif
 /******************* PAL-Specific Entrypoints *****************************/
 
 #define IsDebuggerPresent PAL_IsDebuggerPresent
@@ -485,8 +488,8 @@ typedef long time_t;
 // PAL_InitializeDLL() flags - don't start any of the helper threads
 #define PAL_INITIALIZE_DLL             PAL_INITIALIZE_NONE       
 
-// PAL_InitializeCoreCLR() flags
-#define PAL_INITIALIZE_CORECLR         (PAL_INITIALIZE | PAL_INITIALIZE_EXEC_ALLOCATOR)
+// PAL_InitializeChakraCore() flags
+#define PAL_INITIALIZE_CHAKRACORE         (PAL_INITIALIZE | PAL_INITIALIZE_EXEC_ALLOCATOR)
 
 typedef DWORD (PALAPI *PTHREAD_START_ROUTINE)(LPVOID lpThreadParameter);
 typedef PTHREAD_START_ROUTINE LPTHREAD_START_ROUTINE;
@@ -531,6 +534,12 @@ PALAPI
 PAL_InitializeCoreCLR(
     const char *szExePath);
 
+PALIMPORT
+DWORD
+PALAPI
+PAL_InitializeChakraCore(
+    const char *szExePath);
+
 PALIMPORT
 DWORD_PTR
 PALAPI
@@ -787,16 +796,6 @@ BOOL
     DWORD CtrlType
     );
 
-#ifndef CORECLR
-PALIMPORT
-BOOL
-PALAPI
-GenerateConsoleCtrlEvent(
-    IN DWORD dwCtrlEvent,
-    IN DWORD dwProcessGroupId
-    );
-#endif // !CORECLR
-
 //end wincon.h Entrypoints
 
 // From win32.h
@@ -5360,6 +5359,17 @@ InterlockedIncrement(
     return __sync_add_and_fetch(lpAddend, (LONG)1);
 }
 
+EXTERN_C
+PALIMPORT
+inline
+SHORT
+PALAPI
+InterlockedIncrement16(
+    IN OUT SHORT volatile *lpAddend)
+{
+    return __sync_add_and_fetch(lpAddend, (SHORT)1);
+}
+    
 EXTERN_C
 PALIMPORT
 inline
@@ -5576,6 +5586,18 @@ InterlockedExchangeAdd(
     return __sync_fetch_and_add(Addend, Value);
 }
 
+EXTERN_C
+PALIMPORT
+inline
+LONG
+PALAPI
+InterlockedAdd(
+    IN OUT LONG volatile *Addend,
+    IN LONG Value)
+{
+    return InterlockedExchangeAdd(Addend, Value) + Value; 
+}
+    
 EXTERN_C
 PALIMPORT
 inline
@@ -6387,9 +6409,13 @@ PALIMPORT char * __cdecl _strdup(const char *);
 #define alloca  __builtin_alloca
 #endif // __GNUC__
 
+#ifndef NO_PAL_MINMAX
 #define max(a, b) (((a) > (b)) ? (a) : (b))
 #define min(a, b) (((a) < (b)) ? (a) : (b))
-
+#endif
+    
+#define USING_PAL_MINMAX 1
+    
 #endif // !PAL_STDCPP_COMPAT
 
 PALIMPORT PAL_NORETURN void __cdecl exit(int);
@@ -6501,7 +6527,7 @@ PALIMPORT PAL_FILE * __cdecl _wfsopen(const WCHAR *, const WCHAR *, int);
 
 PALIMPORT int __cdecl rand(void);
 PALIMPORT void __cdecl srand(unsigned int);
-
+PALIMPORT errno_t __cdecl rand_s(unsigned int*);
 PALIMPORT int __cdecl printf(const char *, ...);
 PALIMPORT int __cdecl vprintf(const char *, va_list);
 

+ 2 - 0
pal/inc/pal_mstypes.h

@@ -227,6 +227,8 @@ typedef unsigned __int32 uint32_t;
 typedef __int16 int16_t;
 typedef unsigned __int16 uint16_t;
 typedef __int8 int8_t;
+#define __int8_t_defined
+    
 typedef unsigned __int8 uint8_t;
 #endif // PAL_IMPLEMENTATION
 

+ 4 - 14
pal/inc/rt/palrt.h

@@ -138,20 +138,6 @@ typedef enum tagEFaultRepRetVal
 
 #include "pal.h"
 
-/*
-#ifndef PAL_STDCPP_COMPAT
-#ifdef __cplusplus
-#ifndef __PLACEMENT_NEW_INLINE
-#define __PLACEMENT_NEW_INLINE
-inline void *__cdecl operator new(size_t, void *_P)
-{
-    return (_P);
-}
-#endif // __PLACEMENT_NEW_INLINE
-#endif // __cplusplus
-#endif // !PAL_STDCPP_COMPAT
-*/
-
 #include <pal_assert.h>
 
 #if defined(_DEBUG)
@@ -1256,6 +1242,10 @@ typename std::remove_reference<T>::type&& move( T&& t );
 typedef DWORD OLE_COLOR;
 
 #ifndef PAL_STDCPP_COMPAT
+// defined in xmmintrin.h
+typedef float __m128 __attribute__((__vector_size__(16)));
+typedef double __m128d __attribute__((__vector_size__(16)));
+
 // __m128i defined in emmintrin.h
 typedef union __m128i {
     __int8              m128i_i8[16];

+ 41 - 0
pal/inc/unixasmmacros.inc

@@ -0,0 +1,41 @@
+//
+// Copyright (c) Microsoft. All rights reserved.
+// Licensed under the MIT license. See LICENSE file in the project root for full license information.
+//
+
+#define INVALIDGCVALUE -0x33333333 // 0CCCCCCCDh - the assembler considers it to be a signed integer constant
+
+#if defined(__APPLE__)
+#define C_FUNC(name) _##name
+#define EXTERNAL_C_FUNC(name) C_FUNC(name)
+#define LOCAL_LABEL(name) L##name
+#else
+#define C_FUNC(name) name
+#define EXTERNAL_C_FUNC(name) C_FUNC(name)@plt
+#define LOCAL_LABEL(name) .L##name
+#endif
+
+#if defined(__APPLE__)
+#define C_PLTFUNC(name) _##name
+#else
+#define C_PLTFUNC(name) name@PLT
+#endif
+
+.macro LEAF_END Name, Section
+        LEAF_END_MARKED \Name, \Section
+.endm
+
+.macro END_PROLOGUE
+.endm
+
+.macro SETALIAS New, Old
+        .equiv \New, \Old
+.endm
+
+#if defined(_AMD64_)
+#include "unixasmmacrosamd64.inc"
+#elif defined(_ARM_)
+#include "unixasmmacrosarm.inc"
+#elif defined(_ARM64_)
+#include "unixasmmacrosarm64.inc"
+#endif

+ 340 - 0
pal/inc/unixasmmacrosamd64.inc

@@ -0,0 +1,340 @@
+//
+// Copyright (c) Microsoft. All rights reserved.
+// Licensed under the MIT license. See LICENSE file in the project root for full license information.
+//
+
+.macro NESTED_ENTRY Name, Section, Handler
+        LEAF_ENTRY \Name, \Section
+        .ifnc \Handler, NoHandler
+#if defined(__APPLE__)
+        .cfi_personality 0x9b, C_FUNC(\Handler) // 0x9b == DW_EH_PE_indirect | DW_EH_PE_pcrel | DW_EH_PE_sdata4
+#else
+        .cfi_personality 0, C_FUNC(\Handler) // 0 == DW_EH_PE_absptr
+#endif
+        .endif
+.endm
+
+.macro NESTED_END Name, Section
+        LEAF_END \Name, \Section
+#if defined(__APPLE__)
+        .section __LD,__compact_unwind,regular,debug
+        .quad C_FUNC(\Name)
+        .set C_FUNC(\Name\()_Size), C_FUNC(\Name\()_End) - C_FUNC(\Name)
+        .long C_FUNC(\Name\()_Size)
+        .long 0x04000000 # DWARF
+        .quad 0
+        .quad 0
+#endif
+.endm
+
+.macro PATCH_LABEL Name
+        .global C_FUNC(\Name)
+C_FUNC(\Name):
+.endm
+
+.macro LEAF_ENTRY Name, Section
+        .global C_FUNC(\Name)
+#if defined(__APPLE__)
+        .text
+#else
+        .type \Name, %function
+#endif
+C_FUNC(\Name):
+        .cfi_startproc
+.endm
+
+.macro LEAF_END_MARKED Name, Section
+C_FUNC(\Name\()_End):
+        .global C_FUNC(\Name\()_End)
+#if !defined(__APPLE__)
+        .size \Name, .-\Name
+#endif
+        .cfi_endproc
+.endm
+
+.macro NOP_3_BYTE
+        nop dword ptr [rax]
+.endm
+
+.macro NOP_2_BYTE
+        xchg ax, ax
+.endm
+
+.macro REPRET
+        .byte 0xf3
+        .byte 0xc3
+.endm
+
+.macro TAILJMP_RAX
+        .byte 0x48
+        .byte 0xFF
+        .byte 0xE0
+.endm
+
+.macro PREPARE_EXTERNAL_VAR Name, HelperReg
+        mov \HelperReg, [rip + C_FUNC(\Name)@GOTPCREL]
+.endm
+
+.macro push_nonvol_reg Register
+        push \Register
+        .cfi_adjust_cfa_offset 8
+        .cfi_rel_offset \Register, 0
+.endm
+
+.macro pop_nonvol_reg Register
+        pop \Register
+        .cfi_adjust_cfa_offset -8
+        .cfi_restore \Register
+.endm
+
+.macro alloc_stack Size
+.att_syntax
+        lea -\Size(%rsp), %rsp
+.intel_syntax noprefix
+        .cfi_adjust_cfa_offset \Size
+.endm
+
+.macro free_stack Size
+.att_syntax
+        lea \Size(%rsp), %rsp
+.intel_syntax noprefix
+        .cfi_adjust_cfa_offset -\Size
+.endm
+
+.macro set_cfa_register Reg, Offset
+        .cfi_def_cfa_register \Reg
+        .cfi_def_cfa_offset \Offset
+.endm
+
+.macro save_reg_postrsp Reg, Offset
+        __Offset = \Offset
+        mov     qword ptr [rsp + __Offset], \Reg
+        .cfi_rel_offset \Reg, __Offset
+.endm
+
+.macro restore_reg Reg, Offset
+        __Offset = \Offset
+        mov             \Reg, [rsp + __Offset]
+        .cfi_restore \Reg
+.endm
+
+.macro save_xmm128_postrsp Reg, Offset
+        __Offset = \Offset
+        movdqa  xmmword ptr [rsp + __Offset], \Reg
+        // NOTE: We cannot use ".cfi_rel_offset \Reg, __Offset" here
+        // because the xmm registers are not supported by libunwind
+.endm
+
+.macro restore_xmm128 Reg, ofs
+        __Offset = \ofs
+        movdqa          \Reg, xmmword ptr [rsp + __Offset]
+        // NOTE: We cannot use ".cfi_restore \Reg" here
+        // because the xmm registers are not supported by libunwind
+
+.endm
+
+.macro PUSH_CALLEE_SAVED_REGISTERS
+
+        push_register rbp
+        push_register rbx
+        push_register r15
+        push_register r14
+        push_register r13
+        push_register r12
+
+.endm
+
+.macro POP_CALLEE_SAVED_REGISTERS
+
+        pop_nonvol_reg r12
+        pop_nonvol_reg r13
+        pop_nonvol_reg r14
+        pop_nonvol_reg r15
+        pop_nonvol_reg rbx
+        pop_nonvol_reg rbp
+
+.endm
+
+.macro push_register Reg
+        push            \Reg
+        .cfi_adjust_cfa_offset 8
+.endm
+
+.macro push_eflags
+        pushfq
+        .cfi_adjust_cfa_offset 8
+.endm
+
+.macro push_argument_register Reg
+        push_register \Reg
+.endm
+
+.macro PUSH_ARGUMENT_REGISTERS
+
+        push_argument_register r9
+        push_argument_register r8
+        push_argument_register rcx
+        push_argument_register rdx
+        push_argument_register rsi
+        push_argument_register rdi
+
+.endm
+
+.macro pop_register Reg
+        pop            \Reg
+        .cfi_adjust_cfa_offset -8
+.endm
+
+.macro pop_eflags
+        popfq
+        .cfi_adjust_cfa_offset -8
+.endm
+
+.macro pop_argument_register Reg
+        pop_register \Reg
+.endm
+
+.macro POP_ARGUMENT_REGISTERS
+
+        pop_argument_register rdi
+        pop_argument_register rsi
+        pop_argument_register rdx
+        pop_argument_register rcx
+        pop_argument_register r8
+        pop_argument_register r9
+
+.endm
+
+.macro SAVE_FLOAT_ARGUMENT_REGISTERS ofs
+
+        save_xmm128_postrsp xmm0, \ofs
+        save_xmm128_postrsp xmm1, \ofs + 0x10
+        save_xmm128_postrsp xmm2, \ofs + 0x20
+        save_xmm128_postrsp xmm3, \ofs + 0x30
+        save_xmm128_postrsp xmm4, \ofs + 0x40
+        save_xmm128_postrsp xmm5, \ofs + 0x50
+        save_xmm128_postrsp xmm6, \ofs + 0x60
+        save_xmm128_postrsp xmm7, \ofs + 0x70
+
+.endm
+
+.macro RESTORE_FLOAT_ARGUMENT_REGISTERS ofs
+
+        restore_xmm128  xmm0, \ofs
+        restore_xmm128  xmm1, \ofs + 0x10
+        restore_xmm128  xmm2, \ofs + 0x20
+        restore_xmm128  xmm3, \ofs + 0x30
+        restore_xmm128  xmm4, \ofs + 0x40
+        restore_xmm128  xmm5, \ofs + 0x50
+        restore_xmm128  xmm6, \ofs + 0x60
+        restore_xmm128  xmm7, \ofs + 0x70
+
+.endm
+
+// Stack layout:
+//
+// (stack parameters)
+// ...
+// return address
+// CalleeSavedRegisters::rbp
+// CalleeSavedRegisters::rbx
+// CalleeSavedRegisters::r15
+// CalleeSavedRegisters::r14
+// CalleeSavedRegisters::r13
+// CalleeSavedRegisters::r12
+// ArgumentRegisters::r9
+// ArgumentRegisters::r8
+// ArgumentRegisters::rcx
+// ArgumentRegisters::rdx
+// ArgumentRegisters::rsi
+// ArgumentRegisters::rdi    <- __PWTB_StackAlloc, __PWTB_TransitionBlock
+// padding to align xmm save area
+// xmm7
+// xmm6
+// xmm5
+// xmm4
+// xmm3
+// xmm2
+// xmm1
+// xmm0                      <- __PWTB_FloatArgumentRegisters
+// extra locals + padding to qword align
+.macro PROLOG_WITH_TRANSITION_BLOCK extraLocals = 0, stackAllocOnEntry = 0, stackAllocSpill1, stackAllocSpill2, stackAllocSpill3
+
+        __PWTB_FloatArgumentRegisters = \extraLocals
+
+        .if ((__PWTB_FloatArgumentRegisters % 16) != 0)
+        __PWTB_FloatArgumentRegisters = __PWTB_FloatArgumentRegisters + 8
+        .endif
+
+        __PWTB_StackAlloc = __PWTB_FloatArgumentRegisters + 8 * 16 + 8 // 8 floating point registers
+        __PWTB_TransitionBlock = __PWTB_StackAlloc
+
+        .if \stackAllocOnEntry >= 4*8
+        .error "Max supported stackAllocOnEntry is 3*8"
+        .endif
+
+        .if \stackAllocOnEntry > 0
+        .cfi_adjust_cfa_offset \stackAllocOnEntry
+        .endif
+
+        // PUSH_CALLEE_SAVED_REGISTERS expanded here
+
+        .if \stackAllocOnEntry < 8
+        push_nonvol_reg rbp
+        mov rbp, rsp
+        .endif
+
+        .if \stackAllocOnEntry < 2*8
+        push_nonvol_reg rbx
+        .endif
+
+        .if \stackAllocOnEntry < 3*8
+        push_nonvol_reg r15
+        .endif
+
+        push_nonvol_reg r14
+        push_nonvol_reg r13
+        push_nonvol_reg r12
+
+        // ArgumentRegisters
+        PUSH_ARGUMENT_REGISTERS
+
+        .if \stackAllocOnEntry >= 3*8
+        mov \stackAllocSpill3, [rsp + 0x48]
+        save_reg_postrsp    r15, 0x48
+        .endif
+
+        .if \stackAllocOnEntry >= 2*8
+        mov \stackAllocSpill2, [rsp + 0x50]
+        save_reg_postrsp    rbx, 0x50
+        .endif
+
+        .if \stackAllocOnEntry >= 8
+        mov \stackAllocSpill1, [rsp + 0x58]
+        save_reg_postrsp    rbp, 0x58
+        lea rbp, [rsp + 0x58]
+        .endif
+
+        alloc_stack     __PWTB_StackAlloc
+        SAVE_FLOAT_ARGUMENT_REGISTERS __PWTB_FloatArgumentRegisters
+
+        END_PROLOGUE
+
+.endm
+
+.macro EPILOG_WITH_TRANSITION_BLOCK_RETURN
+
+        add rsp, __PWTB_StackAlloc
+        POP_CALLEE_SAVED_REGISTERS
+        ret
+
+.endm
+
+.macro EPILOG_WITH_TRANSITION_BLOCK_TAILCALL
+
+        RESTORE_FLOAT_ARGUMENT_REGISTERS __PWTB_FloatArgumentRegisters
+        free_stack      __PWTB_StackAlloc
+        POP_ARGUMENT_REGISTERS
+        POP_CALLEE_SAVED_REGISTERS
+
+.endm

+ 474 - 0
pal/inc/volatile.h

@@ -0,0 +1,474 @@
+//
+// Copyright (c) Microsoft. All rights reserved.
+// Licensed under the MIT license. See LICENSE file in the project root for full license information.
+//
+//
+// Volatile.h
+// 
+
+// 
+// Defines the Volatile<T> type, which provides uniform volatile-ness on
+// Visual C++ and GNU C++.
+// 
+// Visual C++ treats accesses to volatile variables as follows: no read or write
+// can be removed by the compiler, no global memory access can be moved backwards past
+// a volatile read, and no global memory access can be moved forward past a volatile
+// write.
+// 
+// The GCC volatile semantic is straight out of the C standard: the compiler is not 
+// allowed to remove accesses to volatile variables, and it is not allowed to reorder 
+// volatile accesses relative to other volatile accesses.  It is allowed to freely 
+// reorder non-volatile accesses relative to volatile accesses.
+//
+// We have lots of code that assumes that ordering of non-volatile accesses will be 
+// constrained relative to volatile accesses.  For example, this pattern appears all 
+// over the place:
+//
+//     static volatile int lock = 0;
+//
+//     while (InterlockedCompareExchange(&lock, 0, 1)) 
+//     {
+//         //spin
+//     }
+//                
+//     //read and write variables protected by the lock
+//
+//     lock = 0;
+//
+// This depends on the reads and writes in the critical section not moving past the 
+// final statement, which releases the lock.  If this should happen, then you have an 
+// unintended race.
+// 
+// The solution is to ban the use of the "volatile" keyword, and instead define our
+// own type Volatile<T>, which acts like a variable of type T except that accesses to
+// the variable are always given VC++'s volatile semantics.
+// 
+// (NOTE: The code above is not intended to be an example of how a spinlock should be 
+// implemented; it has many flaws, and should not be used. This code is intended only 
+// to illustrate where we might get into trouble with GCC's volatile semantics.)
+// 
+// @TODO: many of the variables marked volatile in the CLR do not actually need to be 
+// volatile.  For example, if a variable is just always passed to Interlocked functions
+// (such as a refcount variable), there is no need for it to be volatile.  A future 
+// cleanup task should be to examine each volatile variable and make them non-volatile
+// if possible.
+// 
+// @TODO: link to a "Memory Models for CLR Devs" doc here (this doc does not yet exist).
+//
+
+#ifndef _VOLATILE_H_
+#define _VOLATILE_H_
+
+// xplat-todo: remove?
+// #ifndef CLR_STANDALONE_BINDER
+// #include "staticcontract.h"
+// #endif
+
+//
+// This code is extremely compiler- and CPU-specific, and will need to be altered to 
+// support new compilers and/or CPUs.  Here we enforce that we can only compile using
+// VC++, or GCC on x86, AMD64, ARM or ARM64.
+// 
+#if !defined(_MSC_VER) && !defined(__GNUC__)
+#error The Volatile type is currently only defined for Visual C++ and GNU C++
+#endif
+
+#if defined(__GNUC__) && !defined(_X86_) && !defined(_AMD64_) && !defined(_ARM_) && !defined(_ARM64_)
+#error The Volatile type is currently only defined for GCC when targeting x86, AMD64, ARM or ARM64 CPUs
+#endif
+
+#if defined(__GNUC__)
+#if defined(_ARM_) || defined(_ARM64_)
+// This is functionally equivalent to the MemoryBarrier() macro used on ARM on Windows.
+#define VOLATILE_MEMORY_BARRIER() asm volatile ("dmb sy" : : : "memory")
+#else
+//
+// For GCC, we prevent reordering by the compiler by inserting the following after a volatile
+// load (to prevent subsequent operations from moving before the read), and before a volatile 
+// write (to prevent prior operations from moving past the write).  We don't need to do anything
+// special to prevent CPU reorderings, because the x86 and AMD64 architectures are already
+// sufficiently constrained for our purposes.  If we ever need to run on weaker CPU architectures
+// (such as PowerPC), then we will need to do more work.
+// 
+// Please do not use this macro outside of this file.  It is subject to change or removal without
+// notice.
+//
+#define VOLATILE_MEMORY_BARRIER() asm volatile ("" : : : "memory")
+#endif // defined(_ARM_) || defined(_ARM64_)
+#elif defined(_ARM_) && _ISO_VOLATILE
+// ARM has a very weak memory model and very few tools to control that model. We're forced to perform a full
+// memory barrier to preserve the volatile semantics. Technically this is only necessary on MP systems but we
+// currently don't have a cheap way to determine the number of CPUs from this header file. Revisit this if it
+// turns out to be a performance issue for the uni-proc case.
+#define VOLATILE_MEMORY_BARRIER() MemoryBarrier()
+#else
+//
+// On VC++, reorderings at the compiler and machine level are prevented by the use of the 
+// "volatile" keyword in VolatileLoad and VolatileStore.  This should work on any CPU architecture
+// targeted by VC++ with /iso_volatile-.
+//
+#define VOLATILE_MEMORY_BARRIER()
+#endif
+
+//
+// VolatileLoad loads a T from a pointer to T.  It is guaranteed that this load will not be optimized
+// away by the compiler, and that any operation that occurs after this load, in program order, will
+// not be moved before this load.  In general it is not guaranteed that the load will be atomic, though
+// this is the case for most aligned scalar data types.  If you need atomic loads or stores, you need
+// to consult the compiler and CPU manuals to find which circumstances allow atomicity.
+//
+template<typename T>
+inline
+T VolatileLoad(T const * pt)
+{
+    // STATIC_CONTRACT_SUPPORTS_DAC_HOST_ONLY;
+
+    T val = *(T volatile const *)pt;
+    VOLATILE_MEMORY_BARRIER();
+    return val;
+}
+
+template<typename T>
+inline
+T VolatileLoadWithoutBarrier(T const * pt)
+{
+    // STATIC_CONTRACT_SUPPORTS_DAC_HOST_ONLY;
+
+    T val = *(T volatile const *)pt;
+    return val;
+}
+
+template <typename T> class Volatile;
+
+template<typename T>
+inline
+T VolatileLoad(Volatile<T> const * pt)
+{
+    // STATIC_CONTRACT_SUPPORTS_DAC;
+    return pt->Load();
+}
+
+//
+// VolatileStore stores a T into the target of a pointer to T.  It is guaranteed that this store will
+// not be optimized away by the compiler, and that any operation that occurs before this store, in program
+// order, will not be moved after this store.  In general, it is not guaranteed that the store will be
+// atomic, though this is the case for most aligned scalar data types.  If you need atomic loads or stores,
+// you need to consult the compiler and CPU manuals to find which circumstances allow atomicity.
+//
+template<typename T>
+inline
+void VolatileStore(T* pt, T val)
+{
+    // STATIC_CONTRACT_SUPPORTS_DAC_HOST_ONLY;
+
+    VOLATILE_MEMORY_BARRIER();
+    *(T volatile *)pt = val;
+}
+
+template<typename T>
+inline
+void VolatileStoreWithoutBarrier(T* pt, T val)
+{
+    // STATIC_CONTRACT_SUPPORTS_DAC_HOST_ONLY;
+
+    *(T volatile *)pt = val;
+}
+
+//
+// Volatile<T> implements accesses with our volatile semantics over a variable of type T.
+// Wherever you would have used a "volatile Foo" or, equivalently, "Foo volatile", use Volatile<Foo> 
+// instead.  If Foo is a pointer type, use VolatilePtr.
+// 
+// Note that there are still some things that don't work with a Volatile<T>,
+// that would have worked with a "volatile T".  For example, you can't cast a Volatile<int> to a float.
+// You must instead cast to an int, then to a float.  Or you can call Load on the Volatile<int>, and
+// cast the result to a float.  In general, calling Load or Store explicitly will work around 
+// any problems that can't be solved by operator overloading.
+// 
+// @TODO: it's not clear that we actually *want* any operator overloading here.  It's in here primarily
+// to ease the task of converting all of the old uses of the volatile keyword, but in the long
+// run it's probably better if users of this class are forced to call Load() and Store() explicitly.
+// This would make it much more clear where the memory barriers are, and which operations are actually
+// being performed, but it will have to wait for another cleanup effort.
+//
+template <typename T>
+class Volatile
+{
+private:
+    //
+    // The data which we are treating as volatile
+    //
+    T m_val;
+
+public:
+    //
+    // Default constructor.  Results in an uninitialized value!
+    //
+    inline Volatile() 
+    {
+        // STATIC_CONTRACT_SUPPORTS_DAC;
+    }
+
+    //
+    // Allow initialization of Volatile<T> from a T
+    //
+    inline Volatile(const T& val) 
+    {
+        // STATIC_CONTRACT_SUPPORTS_DAC;
+        ((volatile T &)m_val) = val;
+    }
+
+    //
+    // Copy constructor
+    //
+    inline Volatile(const Volatile<T>& other)
+    {
+        // STATIC_CONTRACT_SUPPORTS_DAC;
+        ((volatile T &)m_val) = other.Load();
+    }
+
+    //
+    // Loads the value of the volatile variable.  See code:VolatileLoad for the semantics of this operation.
+    //
+    inline T Load() const
+    {
+        // STATIC_CONTRACT_SUPPORTS_DAC;
+        return VolatileLoad(&m_val);
+    }
+
+    //
+    // Loads the value of the volatile variable atomically without erecting the memory barrier.
+    //
+    inline T LoadWithoutBarrier() const
+    {
+        // STATIC_CONTRACT_SUPPORTS_DAC;
+        return ((volatile T &)m_val);
+    }
+
+    //
+    // Stores a new value to the volatile variable.  See code:VolatileStore for the semantics of this
+    // operation.
+    //
+    inline void Store(const T& val) 
+    {
+        // STATIC_CONTRACT_SUPPORTS_DAC;
+        VolatileStore(&m_val, val);
+    }
+
+
+    //
+    // Stores a new value to the volatile variable atomically without erecting the memory barrier.
+    //
+    inline void StoreWithoutBarrier(const T& val) const
+    {
+        // STATIC_CONTRACT_SUPPORTS_DAC;
+        ((volatile T &)m_val) = val;
+    }
+
+
+    //
+    // Gets a pointer to the volatile variable.  This is dangerous, as it permits the variable to be
+    // accessed without using Load and Store, but it is necessary for passing Volatile<T> to APIs like
+    // InterlockedIncrement.
+    //
+    inline volatile T* GetPointer() { return (volatile T*)&m_val; }
+
+
+    //
+    // Gets the raw value of the variable.  This is dangerous, as it permits the variable to be
+    // accessed without using Load and Store
+    //
+    inline T& RawValue() { return m_val; }
+
+    //
+    // Allow casts from Volatile<T> to T.  Note that this allows implicit casts, so you can
+    // pass a Volatile<T> directly to a method that expects a T.
+    //
+    inline operator T() const 
+    {
+        // STATIC_CONTRACT_SUPPORTS_DAC;
+        return this->Load();
+    }
+
+    //
+    // Assignment from T
+    //
+    inline Volatile<T>& operator=(T val) {Store(val); return *this;}
+
+    //
+    // Get the address of the volatile variable.  This is dangerous, as it allows the value of the 
+    // volatile variable to be accessed directly, without going through Load and Store, but it is
+    // necessary for passing Volatile<T> to APIs like InterlockedIncrement.  Note that we are returning
+    // a pointer to a volatile T here, so we cannot accidentally pass this pointer to an API that 
+    // expects a normal pointer.
+    //
+    inline T volatile * operator&() {return this->GetPointer();}
+    inline T volatile const * operator&() const {return this->GetPointer();}
+
+    //
+    // Comparison operators
+    //
+    template<typename TOther>
+    inline bool operator==(const TOther& other) const {return this->Load() == other;}
+
+    template<typename TOther>
+    inline bool operator!=(const TOther& other) const {return this->Load() != other;}
+
+    //
+    // Miscellaneous operators.  Add more as necessary.
+    //
+    inline Volatile<T>& operator+=(T val) {Store(this->Load() + val); return *this;}
+    inline Volatile<T>& operator-=(T val) {Store(this->Load() - val); return *this;}
+    inline Volatile<T>& operator|=(T val) {Store(this->Load() | val); return *this;}
+    inline Volatile<T>& operator&=(T val) {Store(this->Load() & val); return *this;}
+    inline bool operator!() const
+    {
+        // STATIC_CONTRACT_SUPPORTS_DAC;
+        return !this->Load();
+    }
+
+    //
+    // Prefix increment
+    //
+    inline Volatile& operator++() {this->Store(this->Load()+1); return *this;}
+
+    //
+    // Postfix increment
+    //
+    inline T operator++(int) {T val = this->Load(); this->Store(val+1); return val;}
+
+    //
+    // Prefix decrement
+    //
+    inline Volatile& operator--() {this->Store(this->Load()-1); return *this;}
+
+    //
+    // Postfix decrement
+    //
+    inline T operator--(int) {T val = this->Load(); this->Store(val-1); return val;}
+};
+
+//
+// A VolatilePtr builds on Volatile<T> by adding operators appropriate to pointers.
+// Wherever you would have used "Foo * volatile", use "VolatilePtr<Foo>" instead.
+// 
+// VolatilePtr also allows the substitution of other types for the underlying pointer.  This
+// allows you to wrap a VolatilePtr around a custom type that looks like a pointer.  For example,
+// if what you want is a "volatile DPTR<Foo>", use "VolatilePtr<Foo, DPTR<Foo>>".
+//
+template <typename T, typename P = T*>
+class VolatilePtr : public Volatile<P>
+{
+public:
+    //
+    // Default constructor.  Results in an uninitialized pointer!
+    //
+    inline VolatilePtr() 
+    {
+        // STATIC_CONTRACT_SUPPORTS_DAC;
+    }
+
+    //
+    // Allow assignment from the pointer type.
+    //
+    inline VolatilePtr(P val) : Volatile<P>(val) 
+    {
+        // STATIC_CONTRACT_SUPPORTS_DAC;
+    }
+
+    //
+    // Copy constructor
+    //
+    inline VolatilePtr(const VolatilePtr& other) : Volatile<P>(other) 
+    {
+        // STATIC_CONTRACT_SUPPORTS_DAC;
+    }
+
+    //
+    // Cast to the pointer type
+    //
+    inline operator P() const 
+    {
+        // STATIC_CONTRACT_SUPPORTS_DAC;
+        return (P)this->Load();
+    }
+
+    //
+    // Member access
+    //
+    inline P operator->() const 
+    {
+        // STATIC_CONTRACT_SUPPORTS_DAC;
+        return (P)this->Load();
+    }
+
+    //
+    // Dereference the pointer
+    //
+    inline T& operator*() const 
+    {
+        // STATIC_CONTRACT_SUPPORTS_DAC;
+        return *(P)this->Load();
+    }
+
+    //
+    // Access the pointer as an array
+    //
+    template <typename TIndex>
+    inline T& operator[](TIndex index) 
+    {
+        // STATIC_CONTRACT_SUPPORTS_DAC;
+        return ((P)this->Load())[index];
+    }
+};
+
+
+//
+// Warning: workaround
+// 
+// At the bottom of this file, we are going to #define the "volatile" keyword such that it is illegal
+// to use it.  Unfortunately, VC++ uses the volatile keyword in stddef.h, in the definition of "offsetof".
+// GCC does not use volatile in its definition.
+// 
+// To get around this, we include stddef.h here (even if we're on GCC, for consistency).  We then need
+// to redefine offsetof such that it does not use volatile, if we're building with VC++.
+//
+#include <stddef.h>
+#ifdef _MSC_VER
+#undef offsetof
+#ifdef  _WIN64
+#define offsetof(s,m)   (size_t)( (ptrdiff_t)&reinterpret_cast<const char&>((((s *)0)->m)) )
+#else
+#define offsetof(s,m)   (size_t)&reinterpret_cast<const char&>((((s *)0)->m))
+#endif //_WIN64
+
+// These also use volatile, so we'll include them here.
+//#include <intrin.h>
+//#include <memory>
+
+#endif //_MSC_VER
+
+//
+// From here on out, we ban the use of the "volatile" keyword.  If you found this while trying to define
+// a volatile variable, go to the top of this file and start reading.
+//
+#ifdef volatile
+#undef volatile
+#endif
+// ***** Temporarily removing this to unblock integration with new VC++ bits
+//#define volatile (DoNotUseVolatileKeyword) volatile
+
+// The substitution for volatile above is defined in such a way that we can still explicitly access the
+// volatile keyword without error using the macros below. Use with care.
+//#define REMOVE_DONOTUSE_ERROR(x)
+//#define RAW_KEYWORD(x) REMOVE_DONOTUSE_ERROR x
+#define RAW_KEYWORD(x) x
+
+// Disable use of Volatile<T> for GC/HandleTable code except on platforms where it's absolutely necessary.
+#if defined(_MSC_VER) && !defined(_ARM_)
+#define VOLATILE(T) T RAW_KEYWORD(volatile)
+#else
+#define VOLATILE(T) Volatile<T>
+#endif
+
+#endif //_VOLATILE_H_

+ 92 - 4
pal/src/CMakeLists.txt

@@ -56,23 +56,111 @@ endif()
 add_compile_options(-fno-builtin)
 add_compile_options(-fPIC)
 
+if(PAL_CMAKE_PLATFORM_ARCH_AMD64)
+  set(ARCH_SOURCES
+    arch/i386/context2.S
+    arch/i386/debugbreak.S
+    arch/i386/processor.cpp
+    )
+endif()
+
 set(SOURCES
+  cruntime/file.cpp
+  cruntime/filecrt.cpp
+  cruntime/finite.cpp
+  cruntime/lstr.cpp
+  cruntime/malloc.cpp
+  cruntime/mbstring.cpp
+  cruntime/misc.cpp
+  cruntime/misctls.cpp
+  cruntime/path.cpp
+  cruntime/printf.cpp
+  cruntime/printfcpp.cpp
+  cruntime/silent_printf.cpp
+  cruntime/string.cpp
+  cruntime/stringtls.cpp
+  cruntime/thread.cpp
+  cruntime/wchar.cpp
+  cruntime/wchartls.cpp
+  safecrt/mbusafecrt.c
+  safecrt/safecrt_input_s.c
+#  safecrt/safecrt_output_l.c
+  safecrt/safecrt_output_s.c
+  safecrt/safecrt_winput_s.c
+  safecrt/safecrt_woutput_s.c
   safecrt/memcpy_s.c
+  safecrt/sprintf.c
+  safecrt/sscanf.c
+  safecrt/strcat_s.c
+  safecrt/strcpy_s.c
+  safecrt/strncat_s.c
+  safecrt/strncpy_s.c
+  safecrt/vsprintf.c
+  safecrt/wcscpy_s.c
+  safecrt/wcsncpy_s.c
+  safecrt/xtoa_s.c
+  safecrt/xtow_s.c
+  debug/debug.cpp
+  exception/seh.cpp
+  exception/signal.cpp
+  file/directory.cpp
+  file/file.cpp
+  file/filetime.cpp
+  file/path.cpp
+  file/shmfilelockmgr.cpp
+  handlemgr/handleapi.cpp
+  handlemgr/handlemgr.cpp
+  init/pal.cpp
+  init/sxs.cpp
+  loader/module.cpp
+  loader/modulename.cpp
+  locale/unicode.cpp
+  locale/unicode_data.cpp
+  locale/utf8.cpp
+  map/common.cpp
+  map/map.cpp
+  map/virtual.cpp
+  memory/heap.cpp
+  memory/local.cpp
+  misc/bstr.cpp
+  misc/dbgmsg.cpp
+  misc/error.cpp
+  misc/environ.cpp
+  misc/random.cpp
+  misc/strutil.cpp
+  misc/time.cpp
+  misc/sysinfo.cpp
+  misc/utils.cpp
+  objmgr/palobjbase.cpp
+  objmgr/shmobject.cpp
+  objmgr/shmobjectmanager.cpp
+  shmemory/shmemory.cpp
+  synchobj/mutex.cpp
+  synchmgr/synchcontrollers.cpp
+  synchmgr/synchmanager.cpp
+  synchmgr/wait.cpp
+  sync/cs.cpp
+  thread/context.cpp
+  thread/process.cpp
+  thread/thread.cpp
+  thread/threadsusp.cpp
+  thread/tls.cpp
 )
 
-add_library(coreclrpal
+add_library(Chakra.Pal
   STATIC
   ${SOURCES}
+  ${ARCH_SOURCES}
 )
 
 if(CMAKE_SYSTEM_NAME STREQUAL Linux)
   if(PAL_CMAKE_PLATFORM_ARCH_AMD64)
-    target_link_libraries(coreclrpal
+    target_link_libraries(Chakra.Pal
       unwind-x86_64
     )
   endif()
     
-  target_link_libraries(coreclrpal
+  target_link_libraries(Chakra.Pal
     gcc_s
     pthread
     rt
@@ -84,4 +172,4 @@ if(CMAKE_SYSTEM_NAME STREQUAL Linux)
 endif(CMAKE_SYSTEM_NAME STREQUAL Linux)
 
 # Install the static PAL library for VS
-install (TARGETS coreclrpal DESTINATION lib)
+install (TARGETS Chakra.Pal DESTINATION lib)

Some files were not shown because too many files changed in this diff