-
Reservoir Engineering – Laboratory Research - Miscible-Type Waterflooding: Oil Recovery with Micellar Solutions
By W. B. Gogarty, W. C. Tosch
A new recovery process for producing oil under both secondary and tertiary conditions utilizes the unique properties of micellar solutions (also known as microemulsions, swollen micelles, and soluble oils). These solutions, which displace 100 percent of the oil in the reservoir contacted, can be driven through the reservoir with water and are stable in the presence of reservoir water and rock. Basic components of micellar solutions are surfactant, hydrocarbon, and water. They may also contain small amounts of electrolytes and cosurfactants such as alcohols. The specific reservoir application dictates the type and concentration of each component. A salient feature of the process is the capability for mobility control. Micellar solution slug mobility, by way of viscosity control, is made equal to or less than the combined oil and water mobility. Mobility control continues with a mobility buffer that prevents drive water from contacting the micellar solution. Laboratory and field flooding have proven that the process is technically feasible and that surfactant losses by adsorption on porous media are small.

Introduction

Projects are under way to recover the maximum amount of oil under the most favorable economic conditions. New techniques are being developed to increase oil recovery. Polymer solutions are becoming an important means of controlling mobility in a waterflood. Thermal methods such as in-situ combustion and steam injection are being used in reservoirs containing highly viscous crudes. Surfactant flooding is receiving attention as a method of reducing interfacial tension to increase recovery. Exotic recovery processes have been considered, primarily for tertiary operations, but economics are unfavorable in most cases for tertiary recovery. Studies at the Denver Research Center of the Marathon Oil Co. have led to a new oil recovery method. Micellar solutions (sometimes called microemulsions, swollen micelles, and soluble oils) are used to recover oil by miscible-type waterflooding. Basically, these solutions contain surfactant, hydrocarbon, and water. The method can be used in either secondary or tertiary operations. First, the concept of the process is considered in terms of the requirements for an effective miscible waterflooding operation. Next, micellar solution properties are described, including structure, composition, and phase behavior with reservoir fluids. Fluid characteristics are then considered as related to mobility control, and, finally, laboratory and field results are presented to illustrate the efficiency of the process.

Concept of the Process

Unit displacement efficiency and conformance determine the effectiveness of any oil recovery mechanism. In theory, a miscible waterflood should be capable of 100-percent unit displacement efficiency with a correspondingly high conformance. Requirements for the slug of a miscible waterflood include (1) 100-percent displacement of oil in the reservoir contacted, (2) controllable mobility, (3) the capability of being driven through the reservoir with water, (4) a low unit cost to enhance economics, and (5) the ability to remain stable in the presence of reservoir water and rock. Micellar solutions satisfy these requirements. Our discovery that these solutions acted as though they were miscible, by displacing all fluids in the reservoir and by being displaced by water, solved the miscibility problem. Adequate mobility control is possible through variations in solution viscosity by compositional changes. Economic requirements are met, since micellar solution costs below $6/bbl appear possible. Micellar solutions stabilize surfactant in the presence of reservoir rock and water, thus reducing the importance of surfactant loss by adsorption.

Fig. 1 illustrates schematically how these solutions are used. Operations start with injection of a micellar solution slug that serves as the oil-displacing agent. Next, a mobility buffer of either a water-external emulsion or a water solution containing polymer (thickened water) is injected to protect the slug from water invasion. Finally, drive water (the water used in a regular waterflood) is injected to propel the slug and mobility buffer through the reservoir. Reservoir oil and water are displaced ahead of the slug, and a stabilized oil and water bank develops as shown in Fig. 1. Stabilized bank saturations are independent of original oil and water saturations. This means that, for a particular type of reservoir, the displacement mechanism is the same under secondary and tertiary recovery conditions. Oil is produced first in a secondary operation; under tertiary conditions, water is produced first. Movement of the slug through the reservoir is stabilized by the mobility buffer. An unfavorable mobility ratio usually exists at the interface between the buffer and drive
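The mobility-control rule stated above (slug mobility, set via viscosity, made equal to or less than the combined oil and water mobility) can be sketched numerically. All values below are hypothetical illustrations, not data from the paper; mobility is taken as relative permeability divided by viscosity.

```python
# Mobility-control check for a micellar slug (illustrative values only).
# Design rule from the text: slug mobility <= combined oil + water mobility
# of the fluids being displaced ahead of it.

def mobility(k_rel, viscosity_cp):
    """Relative mobility, 1/cp (relative permeability / viscosity)."""
    return k_rel / viscosity_cp

# Hypothetical stabilized-bank properties (assumed, not from the paper):
oil_mob = mobility(k_rel=0.35, viscosity_cp=3.0)    # oil ahead of the slug
water_mob = mobility(k_rel=0.15, viscosity_cp=0.8)  # water ahead of the slug
combined_mob = oil_mob + water_mob

# Raise the slug viscosity (compositionally, in practice) until its
# mobility no longer exceeds the combined value.
slug_visc = 1.0
while mobility(0.9, slug_visc) > combined_mob:
    slug_visc += 0.1

print(f"combined oil+water mobility: {combined_mob:.3f} 1/cp")
print(f"required slug viscosity:     {slug_visc:.1f} cp")
```

With these assumed numbers the slug must be thickened to about 3 cp before the design condition is met; the point is only the direction of the adjustment, not the specific values.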
Jan 1, 1969
-
Producing-Equipment, Methods and Materials - Two Bottom-Hole Pressure Instruments Providing Automatic Surface Recording
By R. H. Kolb
A long-term project at Shell Development Co.'s Exploration and Production Research Laboratory has been the improvement of the accuracy and the ease of BHP measurements. As a result of these efforts, two complete and separate systems have now been built for the automatic logging of BHP variations. The first of these is a small-diameter instrument suitable for running through production tubing on a single-conductor well cable. During the development of this instrument, as much emphasis was placed on providing a high degree of usable sensitivity and repeatable accuracy as on obtaining the advantages of surface recording. The second system combines the benefits of automatic, unattended recording with the convenience of a permanently installed Maihak BHP transmitter.

THE CABLE INSTRUMENT

For many years the standard instrument for BHP determination has been the wireline-operated Amerada recording pressure gauge or one of several similar devices. This gauge records on a small clock-driven chart carried within the instrument, and although relatively precise readings from the chart are possible, they are difficult to obtain. Both the maximum recording time and the resolution of the time measurements are limited by chart size, and when a slow clock is required for long tests, the precision of the time measurement is often inadequate. Since it is impossible to determine the data being recorded until the gauge has been returned to the surface, time is often wasted when a test is protracted beyond the necessary duration or when it is terminated too soon and must be re-run. Clock stoppage or other malfunctions that would be immediately apparent with surface recording remain undetected with down-hole recording; the test is continued for its full term with a consequent loss in production time.

As new uses for subsurface pressure data evolved, the shortcomings of the wireline instrument became increasingly apparent, and the concurrent development of a surface-recording pressure gauge and the associated high-pressure well cable service unit was undertaken.

Description of the Instrument

Because of its ready availability and advanced degree of development, the Amerada bourdon-tube element was chosen as the basic pressure-sensing device. This element converts a given pressure into a proportional angular displacement of its output shaft, and a suitable telemetering system was designed to measure accurately the extent of this displacement and to transmit the measurement to the surface and record it. The telemetering system furnishes a digital record printed on paper tape by an adding-machine-type printer. The present arrangement provides a resolution of one part in 42,000 over the angular equivalent of full-scale deflection, giving a usable sensitivity of better than 0.0025 per cent of full scale. An additional refinement simultaneously records on the tape the time or the depth of the measurement, also in digital form. When the instrument is placed in operation, an adjustable programmer can be set to initiate a read-out cycle automatically at selected time intervals. When subsurface pressures are changing rapidly, readings may be recorded as frequently as once every 10 seconds; when pressures are more nearly stabilized, the period between readings may be extended to as much as 30 minutes. Because the instrument is surface-powered as well as surface-recording, the maximum period of continuous logging is, for all practical purposes, unlimited. The subsurface instrument is a tubular tool, 1 1/4 in. in diameter and 6.5 ft in length, operating on 12,000 ft of conventional 3/16-in. IHO logging cable.

The transmitting section, mounted above the bourdon-tube element in place of the regular recording mechanism, contains no fragile vacuum tubes or temperature-sensitive transistors. This unit has been laboratory-tested to 10,000 psi and 300°F and has performed dependably in a number of field operations. The down-hole transmitting arrangement can be fitted to any standard Amerada pressure element, regardless of range and with no modification of the element itself.

Calibration

To obtain a repeatability commensurate with the sensitivity and resolution of the instrument, it was necessary to develop a special calibrating technique. The manufacturers of the Amerada recording pressure gauge claim an accuracy of only 0.25 per cent of full scale, which is a realistic figure for normal calibrating and operating procedures. An exhaustive investigation was made of the errors inherent in the bourdon-tube element itself, independent
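The quoted telemetry figures are mutually consistent, as a quick arithmetic check shows: a resolution of one part in 42,000 of full-scale deflection corresponds to about 0.0024 percent of full scale, i.e. "better than 0.0025 per cent."

```python
# Sanity check on the quoted telemetry figures: one part in 42,000 of
# full-scale deflection, expressed as a percentage of full scale.
counts = 42_000
sensitivity_pct = 100.0 / counts  # percent of full scale per count

print(f"{sensitivity_pct:.4f}% of full scale")
assert sensitivity_pct < 0.0025   # "better than 0.0025 per cent"
```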
-
Geophysics and Geochemistry - Some Problems in Geothermal Exploration
By T. S. Lovering
The use of geothermal energy is expanding very rapidly. This type of energy has proven commercially profitable for generation of electricity, for space heating, process heating, auxiliary heating of water in conventional steam power plants, and for recovery of chemicals contained in natural hot water and steam. Two types of geothermal energy sources are recognized: (1) hot springs in regions of nearly normal heat flow that tap a deep reservoir through which water moves slowly to a hot-spring conduit and then rapidly to the surface; and (2) hyperthermal areas in which the water is heated by a relatively concentrated heat source related to volcanicity. If there is a geologic trap that provides an analog to a steam boiler, as at Larderello, Italy, the hyperthermal area will have a convecting system that develops superheated water at relatively shallow depth and may provide natural steam in large quantities. If a hyperthermal area is to be productive for a long time, the underflow into the reservoir should be slow enough to allow the heat source and convective system to heat the underflow to the working temperature, and the production rate must not exceed this rate of underflow. A model based on a typical aquifer suggests that the rate of movement of water through the reservoir should be such that a few years are spent in transit between isotherms spaced about 2°F apart. The possibility of finding blind geothermal areas is illustrated by a discussion of the techniques developed in evaluating subsurface temperatures in the East Tintic district of Utah, where a map of isotherms at water level (2000 to 2000 ft below the surface) shows that a hyperthermal area may exist a short distance southeast of the mining district.

Very nearly all of the energy that man currently uses comes ultimately from the sun's radiation. This includes water power; fuels such as wood, peat, coal, and petroleum; the wind; and all our animal power. In the paper summarizing a conference on solar energy,16 the average amount of solar energy received daily on the earth is taken at about 1 cal per m² per min, or slightly less than 2 µcal per cm² per sec; this is almost exactly the amount of energy that the earth, on the average, liberates in regions of normal geothermal gradient due to its own internal heating. In many places, however, the energy released is many times the average, and in some of these hyperthermal areas geothermal steam is used for generation of electricity; hot springs are used for heating buildings and private dwellings, for process heating, and for auxiliary heating of water in conventional steam power plants; and chemicals may be recoverable from both hot water and steam. The use of hot-spring waters for heating houses goes back hundreds of years but until recently was confined to a few dwellings close to the springs. In Korea, some houses had hot-spring water channeled through conduits in the floor centuries ago, and thus the Koreans can be credited with pioneer development of radiant heating. In Iceland at present nearly a third of the population uses natural thermal water for domestic heating. The Reykjavik system pipes hot-spring water at about 94°C throughout the city and has devised insulated double pipes that allow the water to be piped for some 25 km with a drop of only 1°C for every 5 km. The actual cost to the Icelandic consumer is only one-third the cost of heating by imported coal, and yet the industry is one of the most profitable in Iceland. The most profitable use of geothermal energy has been its conversion into electricity, which can be transmitted economically over much greater distances than hot water. The largest installation at the present time is that at Larderello, Italy, where the Count of Larderello began to experiment in the production of electricity from geothermal steam 60 years ago, in 1904.

He installed his first steam turbine, with a capacity of only 250 kw, in 1912, as the result of a local quarrel with the power company that furnished the current required by the Larderello chemical industry, an industry that then dated back nearly a century. As experience was gained in drilling deep holes to tap geothermal steam and in converting it to electric power, the capacity of the installation at Larderello gradually increased, but it was all destroyed by the Germans during their retreat from Italy in the closing
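Two of the quantitative statements above are straightforward arithmetic and can be verified directly: the equivalence of 1 cal per m² per min and "slightly less than 2 µcal per cm² per sec," and the temperature drop over the 25-km Reykjavik distribution line.

```python
# Unit-conversion check for the solar/geothermal flux comparison:
# 1 cal per m^2 per min expressed in microcalories per cm^2 per second.
flux_cal_m2_min = 1.0
flux_ucal_cm2_s = flux_cal_m2_min * 1e6 / 1e4 / 60.0  # ucal/cal, cm^2/m^2, s/min
print(f"{flux_ucal_cm2_s:.2f} ucal/cm^2/s")  # ~1.67, "slightly less than 2"

# Reykjavik line: 94 C water over 25 km of insulated pipe,
# losing 1 C per 5 km.
delivery_temp = 94.0 - (25.0 / 5.0) * 1.0
print(f"delivery temperature: {delivery_temp:.0f} C")  # 89 C at the far end
```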
Jan 1, 1965
-
Institute of Metals Division - Hardenability of Titanium Alloys Calculated from Composition: A Preliminary Examination
By L. D. Jaffe
From data found in the literature, a method has been derived for calculating the hardenability of titanium alloys from their composition. A single graph gives the contributions of each alloying element; these are simply added to a base hardenability for unalloyed titanium. Results agree satisfactorily with measured hardenabilities.

Recently the importance of hardenability of titanium alloys in permitting attainment of high strength and hardness has become apparent. If a titanium-base alloy has adequate hardenability for the section size and quenching practice used, it can generally be heat treated to high strength and hardness either by simple quenching or cooling from temperatures near the β transus, by quenching and tempering (aging), or perhaps by direct isothermal transformation of β.1,2 If the hardenability is inadequate, high strength cannot be attained. Preliminary charts translating hardenability requirements for shapes quenched in various media into terms of Jominy end-quench hardenability specimens or ideal round sizes are now available for titanium alloys. The method proposed in 1942 by M. A. Grossmann for calculating hardenability of steels from composition has proved to be of great practical value. It appeared worthwhile to see whether, using information available in the literature, a corresponding method for calculating hardenability of titanium-base alloys could be derived.

Data

No systematic experimental study of the hardenability of titanium-base alloys has yet been made. There is available, however, a considerable quantity of scattered data bearing on hardenability. Some investigators determined Jominy curves for one or several alloys. Others reported as-quenched hardness for specimens of a few alloys cooled at various rates or in various media. Still others gave the as-quenched hardness for systematically varied compositions, using one or several cooling media. The measurements utilized in this work were limited to hardness. Strength measurements were not used, because a low strength may reflect high hardness combined with brittleness. Since, in general, the hardness of titanium alloys goes through a peak as the cooling rate is varied from extremely fast to extremely slow, the cooling conditions at which peak hardness was found were taken as the measure of hardenability.1,2 When as-quenched hardness was found to drop as the content of an alloying element (believed to increase hardenability) was raised, this was taken as an indication that the cooling rate was faster than that giving peak hardness; otherwise, the hardness should go up because of solid-solution hardening. Also, if the hardness increased on tempering at low temperatures, this again was taken to indicate that the cooling rate had been faster than that giving peak hardness. Many of the data did not indicate precisely the cooling conditions for peak hardness but merely set a maximum or a minimum bound, or a bracket. Since the sizes of the specimens quenched and the locations of the hardness readings were not always given, it was occasionally necessary to estimate reasonable limits for them. Only nominal compositions were available in some cases. Since quenching from below the β transus is equivalent to starting with a mixture of α and β phases, whose individual compositions differ from the overall composition of the alloy, only measurements after quenches from above the β transus were used. Hardness measurements on samples heated during metallographic mounting were also discarded. Because of their bulk, the data used will not be reproduced here; they have been or will be published elsewhere.5-22

Approach

All data were first translated into terms of ideal round size. For conversion from Jominy or quenched round or sheet data, the author's graphs were used.
For a few furnace cools or helium quenches where cooling rates were given, conversion was made on the basis of equal cooling coefficient.23 Plots of ideal round size for peak hardness against percentage of alloying element were next made for
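The additive scheme described in the abstract (a base hardenability for unalloyed titanium plus a per-element contribution read from a single graph) can be sketched as follows. The base value and the per-element coefficients here are hypothetical placeholders, not values from the paper's chart.

```python
# Sketch of the additive hardenability model described in the text:
# ideal round size = base value for unalloyed Ti + sum of per-element
# contributions. The numbers below are HYPOTHETICAL placeholders,
# not the values read from the paper's graph.

BASE_IDEAL_ROUND_IN = 0.2  # assumed base for unalloyed titanium, inches

# Assumed linear contributions, inches of ideal round per wt pct:
CONTRIBUTION_PER_WT_PCT = {"Mo": 0.9, "Cr": 0.7, "Fe": 0.8, "V": 0.4, "Mn": 0.6}

def ideal_round(composition_wt_pct):
    """Estimated ideal round size (in.) for a Ti-base alloy composition."""
    extra = sum(CONTRIBUTION_PER_WT_PCT.get(el, 0.0) * pct
                for el, pct in composition_wt_pct.items())
    return BASE_IDEAL_ROUND_IN + extra

alloy = {"Mo": 2.0, "Fe": 1.0}  # a hypothetical Ti-2Mo-1Fe alloy
print(f"estimated ideal round: {ideal_round(alloy):.1f} in.")
```

The structure, not the coefficients, is the point: each element contributes independently and linearly, exactly as in Grossmann's multiplicative-factor method for steels but in additive form.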
Jan 1, 1956
-
Reservoir Engineering-Laboratory Research - A Systematic Study of Gas and Water Coning by Potentiometric Models
By G. L. Chierici
Starting from Muskat's theory of water and gas coning, maximum permissible oil production rates without water and/or free-gas production have been determined, over a broad range of reservoir and well parameters, using the potentiometric model technique. The main assumptions are as follows: (1) the reservoir rock is homogeneous (either isotropic or anisotropic); (2) the volume of the aquifer underlying the oil zone is very small, so that it does not contribute to reservoir energy; and (3) the gas cap expands at a very low rate, so that it can be assumed to be in quasi-static conditions. The results obtained are presented in the form of diagrams that can be used for solving two types of problems: (1) given the reservoir and fluid characteristics, as well as the position and length of the perforated interval, determine the maximum oil production rate without water and/or free-gas production; and (2) given the reservoir and fluid characteristics only, determine the position and length of the perforated interval that optimize the maximum permissible oil production rate without water and/or free-gas production.

INTRODUCTION

In oil reservoirs where the oil-bearing formation is underlain by an aquifer that does not participate in the production mechanism, water coning is a limiting factor on the flow rates of producing wells. Production rates are usually kept to a value that will prevent the water from entering the wells. The entry of water into a well lowers its productivity by increasing the weight of the fluid column; moreover, the separation of water from the effluent at the surface may constitute a very difficult problem in the case of heavy, viscous oils. A similar situation is encountered in oil reservoirs with a gas cap overlying the oil-saturated zone; here a downward gas cone is induced by the flow of oil toward the producing wells. Production rates must be low enough to prevent the gas from being produced; producing gas from the gas cap would be a waste of energy. Of course, water-coning and gas-coning phenomena can occur at the same time in the same reservoir if the oil-producing formation is both overlain by a gas zone and underlain by a water zone. Because of its practical importance, the mechanism of coning has been studied by many investigators.2,3,5-8 Defining the conditions for obtaining the maximum water-free and/or gas-free oil production rate is a difficult problem, often encountered in one of the following forms: (1) predict the maximum flow rate that can be assigned to a completed well without the simultaneous production of water and/or free gas; (2) define the optimum length and position of the interval to be perforated in a well, in order to obtain the maximum water- and gas-free production rate. A systematic study of these problems was made by means of the electrical analog technique. The results of this study are presented here in the form of a set of curves providing solutions for the problems stated above. These curves are valid only for homogeneous formations, either isotropic or anisotropic. Should the formation be non-homogeneous (through horizontal or vertical variation of permeability, shale diaphragms, fractures, etc.), a specific potentiometric study would be required for each case. In particular, when shale diaphragms of some radial extent are present, the critical rates observed are much larger than would be expected from the diagrams.

STATEMENT OF THE PROBLEM

In the present study the aquifer is assumed to be of such limited volume that it does not contribute to the energy of the reservoir. Moreover, the gas cap is assumed to expand at so low a rate that the potential gradient in the gas cap is negligible. Under static conditions the water-oil and gas-oil interfaces (T1 and T2) are both horizontal. When reservoir production starts, these interfaces take, below each well, a cone-like shape (Fig. 1) whose axis is the axis of the well. This shape results from the equilibrium between potential gradients in the oil zone and gravitational forces due to the density differences between oil and water and between oil and gas. Assuming the oil-bearing formation to be homogeneous and the oil incompressible, the analysis of the problem (see Appendix) shows that the oil-water and gas-oil interfaces are stable only if the oil production rate of the well is not higher than the following values.
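The critical-rate expressions themselves are not reproduced in this excerpt. As an illustration of the kind of result Muskat-type coning analysis gives, a classical Meyer-Garder-style analytical estimate for water coning below a partially penetrating well can be sketched; note that this is a different approximation from the paper's potentiometric diagrams, and the input values below are hypothetical.

```python
import math

# Schematic Meyer-Garder-type water-coning critical rate (field units):
#   q_c [STB/d] = 0.246e-4 * k_o * (rho_w - rho_o) * (h^2 - h_p^2)
#                 / (mu_o * B_o * ln(r_e / r_w))
# with k_o in md, densities in lb/ft^3, h (oil-column thickness) and
# h_p (perforated interval from the top) in ft. Used here only to show
# the dependence on density difference, permeability, and perforations.

def critical_rate(k_md, rho_w, rho_o, h, h_perf, mu_cp, b_o, r_e, r_w):
    return (0.246e-4 * k_md * (rho_w - rho_o) * (h**2 - h_perf**2)
            / (mu_cp * b_o * math.log(r_e / r_w)))

# Hypothetical well: note that shortening the perforated interval
# (smaller h_perf) raises the permissible rate, as in the paper's
# optimization problem (2).
q = critical_rate(k_md=100.0, rho_w=63.0, rho_o=47.0, h=50.0, h_perf=15.0,
                  mu_cp=1.0, b_o=1.2, r_e=1000.0, r_w=0.25)
print(f"critical oil rate: {q:.1f} STB/d")
```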
Jan 1, 1965
-
Institute of Metals Division - Misfit Strain Energy in the Au-Cu System
By Ralph Hultgren
In solid solutions, atoms of differing sizes occupy the same crystalline lattice, requiring that some of them be compressed and others expanded. The energy involved has been called misfit strain energy and is an important concept of crystal chemistry. If the atomic sizes and the elastic constants of the interatomic bonds are known, the misfit energy may be calculated, provided certain simplifying assumptions are allowable. Usually, isotropic crystals are assumed and interatomic distances are taken to be the statistical average determined from X-ray diffraction. Such calculations yield values of the misfit energy of the order of 1 or 2 kcal per g-atom in alloys such as Au-Cu at compositions of 50 atomic pct. However, evidence has accumulated in recent times that atoms change their sizes with the composition of the alloy, implying electronic rearrangement of the bonds. The size changes have been found particularly by application of the X-ray method developed by Warren, Averbach, and Roberts. Thus, Averbach, Flinn, and Cohen3 determined radii in Au-Cu alloys. Oriani4 showed that these new radii led to a calculated misfit energy in disordered AuCu that was decreased more than twenty-fold from the values calculated by the usual theory, to only 80 cal per g-atom. Thermodynamic calculations from the phase diagram5 also show the misfit energy to be no more than a few hundred calories per g-atom in this alloy. The question of what electronic rearrangements are possible therefore becomes compelling in estimating misfit energy. In the following pages the results of certain calculations on the AuCu tetragonal superlattice are submitted. Conclusions drawn from these should be applicable in large degree to disordered solid solutions. As in all ordered states, bonding distances in the superlattice are individually known, rather than being merely the average distances found from the lattice constants of disordered states. Moreover, only the Au-Au and Cu-Cu distances are strained, and the elastic constants of these bonds are known from the elemental state. In the usual calculation it is necessary to assume elastic constants for Au-Cu bonds. Misfit energy has thus been calculable without many of the simplifying assumptions usually made. It is still assumed that the equilibrium bond lengths and the elastic properties of the bonds are the same in the alloy as in the pure metals. As previously discussed, this is probably not correct. Also assumed is that the bonds are not affected by strain of neighboring bonds. A calculation of Young's modulus from compressibility data shows this to be far from true; extensive electronic rearrangements take place. It would seem that misfit energy cannot be calculated from elasticity data for the elements. The usual methods may, however, give an upper limit, which is often much higher than the true value. The question of electronic rearrangement is, of course, a complex one. Pauling's theory gives a simple, approximate treatment of the relation between type of bond and bond distance. This has been applied with some success to the Au-Cu system, as will be shown in a later section.

Misfit Energy in Au-Cu Alloys

Hume-Rothery and Raynor6 discuss the Au-Cu system as a type example of strain energy. The gold atom is 12.8 pct larger in diameter than the copper atom, near the size-factor limit beyond which solid solubility is severely restricted. They therefore consider the misfit energy to be large, a conclusion for which they believe they find evidence in the phase diagram. Gold and copper are completely miscible in the solid state, but the alloy has a minimum melting point at an intermediate composition. From this Hume-Rothery and Raynor conclude that the strain energy is nearly large enough to prevent miscibility; the phase diagram tends toward a eutectic type. In Ag-Cu, which has almost identical size relationships, solid miscibility is quite limited, whereas in Au-Ag, where the atomic sizes are nearly the same, there is complete miscibility without a minimum in the melting point. From their arguments, the heat of formation of Au-Cu would be expected to be endothermic or only slightly exothermic, that of Ag-Cu to be endothermic, and that of Au-Ag to be exothermic. Deviations from Vegard's law of additivity of atomic radii support these conclusions, since Au-Cu and Ag-Cu both show pronounced positive deviations while Au-Ag shows a negative deviation. Nevertheless, Au-Cu alloys form exothermically; indeed, considerably more exothermically than Au-Ag (Table I). Hence, strain energy must be much less important in this case than Hume-Rothery and Raynor supposed.
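The size relationship quoted above can be checked from the elemental fcc lattice constants (Cu a ≈ 3.615 Å, Au a ≈ 4.078 Å, giving 12-coordinate metallic radii of about 1.278 and 1.442 Å), and the Vegard's-law average radius for the 50 atomic pct alloy follows directly:

```python
# Check of the quoted size factor from 12-coordinate metallic radii
# (half the nearest-neighbor distance a / sqrt(2) of the pure fcc metals).
R_AU = 1.442  # Angstrom, from a(Au) ~ 4.078 A
R_CU = 1.278  # Angstrom, from a(Cu) ~ 3.615 A

size_factor_pct = (R_AU / R_CU - 1.0) * 100.0
print(f"Au is {size_factor_pct:.1f}% larger than Cu")  # ~12.8%, as quoted

vegard_avg = 0.5 * R_AU + 0.5 * R_CU
print(f"Vegard average radius at 50 at. pct: {vegard_avg:.3f} A")
# A measured average radius above this value is the "positive deviation"
# cited for Au-Cu and Ag-Cu; below it, the negative deviation of Au-Ag.
```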
Jan 1, 1958
-
Institute of Metals Division - The Influence of Hydrogen on the Tensile Properties of Columbium
By R. D. Daniels, T. W. Wood
The tensile properties of columbium and of Cb-H alloys containing up to 455 ppm H were studied as a function of temperature and strain rate. Hydrogen, introduced into columbium at elevated temperatures using a thermal-equilibrium technique, embrittled columbium most severely at about -77°C. This embrittlement occurred even at hydrogen concentrations on the order of 20 ppm. At higher temperatures, the hydrogen tolerance of columbium increased in relation to the increased solubility of hydrogen in the metal. Below this temperature, hydrogen tolerance, as determined by ductility and fracture stress, increased slightly. Strain rate had little effect on the tensile results for cross-head speeds over the range 0.002 to 2.0 in. per min. Strain aging during the tensile test appears to explain the ductility minimum at -77°C. The apparent increase in hydrogen tolerance at lower temperatures is attributed to the low mobility of hydrogen. Experiments were performed in which samples were prestrained in tension at room temperature and tested to failure at -196°C. The results suggest that hydrogen segregation to preformed crack nuclei can cause subsequent embrittlement even at temperatures where hydrogen mobility is too low to cause embrittlement in a normal tensile test.

Columbium is an example of the class of bcc metals whose ductile-brittle transition temperatures are sensitive to the presence of interstitial atom contaminants. Hydrogen is one of these embrittling contaminants. The embrittling effect of hydrogen is perhaps less potent in columbium than in some of the other bcc refractory metals, but it is still a problem of both theoretical and practical interest. Unlike hydrogen in iron and steels, hydrogen in columbium is exothermically rather than endothermically occluded. The embrittlement process in exothermic systems has not been studied as extensively as that in endothermic systems, especially at hydrogen concentrations below the limit of solubility. The purpose of this investigation was to evaluate the embrittlement process in initially pure columbium as a function of hydrogen content, temperature, and strain rate.

The Cb-H phase diagram, according to Albrecht et al.,1 is shown in Fig. 1. Columbium reacts exothermically with hydrogen, producing a solid solution at concentrations of less than about 250 ppm (parts per million by weight) H at room temperature. At concentrations above the highly temperature-dependent solvus, a second phase is formed. Like many similar hydrogen-metal systems,2 this system exhibits a miscibility gap with respect to hydrogen solution. Albrecht found the critical temperature of the miscibility gap to be about 140°C, the critical concentration to be 0.23 atom fraction hydrogen, and the critical pressure to be 0.01 mm Hg. Above 140°C there is a solid solution of increasing lattice constant extending across the phase diagram. The hydrogen concentrations of particular interest in this investigation were those below the limit of solubility in columbium. At hydrogen concentrations above the limit of solubility, columbium will contain the hydrogen-rich second phase and will be brittle under most testing conditions, because the hydride generally precipitates as platelets with coincident matrix lattice strains.1,3 At hydrogen concentrations below the limit of solubility, the tensile behavior of columbium is expected to be more sensitive to the interrelationships between hydrogen concentration and mobility and the testing variables such as temperature and strain rate. Literature references to the hydrogen embrittlement of metals, especially ferrous alloys and titanium alloys, are too voluminous to mention. It is only recently, however, that detailed studies of the hydrogen embrittlement of columbium have been undertaken. Wilcox et al.4 studied the strain-rate and temperature dependences of the low-temperature deformation behavior of fine-grained arc-melted columbium (1 ppm H) and the effect of hydrogen content (1, 9, and 30 ppm H) on the mechanical behavior of columbium at a series of temperatures for a single strain rate. A strain-aging peak was observed at about -50°C, which was attributed to the presence of hydrogen in the metal. Eustice and Carlson5 studied the effect of hydrogen on the ductility of V-Cb alloys at a series of temperatures over the range -196° to 25°C. Pure columbium was embrittled by 20 ppm H, which produced a ductility transition at approximately -70°C. Ingram et al.6 studied the effect of oxygen and hydrogen on the tensile properties of columbium and tantalum. A minimum in the notched-to-unnotched tensile ratio of hydrogenated columbium was obtained at about -75°C, but because of the relatively large hydrogen contents employed (200 and 390 ppm) the ductility
Jan 1, 1965
-
Extractive Metallurgy Division - Reaction of Silver with Aqueous Solutions of Cyanide and OxygenBy G. A. Deitz, J. Halpern
The kinetics of dissolution of silver in cyanide solutions under oxygen pressure have been investigated over a wide range of conditions with a view to establishing the reactions involved and the factors which influence the rate. The results indicate that the rate is determined principally by the transport of reactants in solution to the silver surface. The thermodynamic features of the reaction with particular reference to the influence of pH and CN- are also discussed and summarized in the form of potential-pH diagrams. METALLIC silver is readily attacked and dissolved by cyanide solutions in the presence of oxygen. This reaction has long been recognized and extensively applied1 in the well-known cyanide process for the extraction of silver from its ores. While this process has been subjected in the past to a number of investigations2,3 both of a practical and fundamental nature, some features relating to the chemistry of the reactions involved, the role of oxygen, the nature of the rate-controlling step, and the effects of pH and certain reagents such as lime, are still not fully understood. It was felt that this situation justified a further investigation of the reaction, made with a view to obtaining a better understanding of its kinetics and mechanism. In the present investigation rate curves for the dissolution of silver were determined over an extensive range of carefully controlled reaction conditions. By carrying out the reactions in a pressure vessel, the partial pressure of oxygen, and hence its concentration, could be varied widely. The influence on the reaction of other variables including temperature, pH, and the concentrations of cyanide, peroxide, and other salts was also examined. The results of these kinetic studies are presented and discussed in this paper, together with a summary of available thermodynamic information relating to the possible reactions which can accompany the attack on silver by aqueous solutions of cyanide and oxygen. 
Chemistry and Thermodynamics of the Reaction The chemistry of the reactions of gold and silver with aqueous cyanide solutions has been the subject of several recent reviews. It has been proposed that the following reaction occurs when silver dissolves in cyanide solutions in the presence of oxygen: 4Ag + 8CN- + O2 + 2H2O → 4Ag(CN)2- + 4OH- [1a] An equation of this form, for the corresponding dissolution of gold, was first suggested by Elsner. Bodlaender5 proposed a similar overall reaction, but suggested that it took place in two steps, 2Ag + 4CN- + O2 + 2H2O → 2Ag(CN)2- + 2OH- + H2O2 [2a] followed by 2Ag + 4CN- + H2O2 → 2Ag(CN)2- + 2OH- [3a] On the other hand, Janin suggested that the dissolution of gold can occur with the liberation of hydrogen, rather than by reduction of oxygen. The corresponding equation for the dissolution of silver is 2Ag + 4CN- + 2H2O → 2Ag(CN)2- + 2OH- + H2 [4a] All these reactions represent oxidation-reduction processes and may be resolved into the corresponding separate oxidation and reduction components. The oxidation step involves the formation of the argentocyanide ion and is the same for all the reactions, i.e., Ag + 2CN- → Ag(CN)2- + e- [5] However, reactions 1a, 2a, 3a, and 4a differ in the nature of the reduction steps, which are, respectively, O2 + 2H2O + 4e- → 4OH- [1b] O2 + 2H2O + 2e- → H2O2 + 2OH- [2b] H2O2 + 2e- → 2OH- [3b] 2H2O + 2e- → 2OH- + H2 [4b]
Jan 1, 1954
-
Geophysics - The Gravity Meter in Underground ProspectingBy W. Allen
FOR the past six years gravity surveys have been used for underground prospecting in the copper mines at Bisbee, Ariz. The primary purpose of the surveys has been to reduce the diamond drilling and crosscutting necessary for exploration. Since many of the orebodies are small, and geologic control is not always apparent, any information that will direct the drilling and crosscutting is highly desirable. Because of extensive development and exploration work in the copper mines at Bisbee, it has been possible to cover more than 630,000 ft of crosscuts on 30 levels with the gravity surveys. In the process the gravity procedures have been refined to a high degree. Density Contrast: For a gravity survey to be successful, a sufficient density contrast must exist between the geologic feature sought and surrounding host rocks. Most mineralized areas will provide this contrast if fairly massive bodies are present. In the Bisbee area the entire sequence of formations, except for alluvium, appears to have specific gravities ranging from 2.65 to 2.70. These values have been determined by means of a large number of cut samples and diamond drill cores. As a further check, vertical gravity differences have been used where nonmineralized sections are known to occur. The only known major gravity disturbances result from mineralization that has increased the density and the voids that have decreased density. The voids are caused by mining operations and by underground water movement that has developed several areas of caverns. Equipment: While not absolutely essential, a small rugged gravity meter, such as the Worden meter, is highly desirable. A tall tripod, about the height of a transit tripod, permits instrument set-ups in deep water and in locations where fallen timber and muck piles make it impossible to use a short tripod. An additional advantage of a tall tripod is that it places the meter in the center of the crosscut, reducing the error caused by the crosscut void. 
Size and weight are important, since the only satisfactory means of operating the meter underground is to carry it by hand. A backpack can be used in rare instances but is usually a hindrance because of the close station spacing. The operator's ability to move through tight clearances will improve survey coverage, as it is then possible to move through raises and caved areas and to pass mine cars and machinery with a minimum of trouble. Station Control: Gravity stations are normally located every 100 ft along the crosscuts, at each intersection, and in the face of all stub crosscuts. In areas of high gravity relief, or where small anomalies might be expected, stations may be located at 25 or 50-ft intervals. When possible, the stations should be offset to avoid effects of raises or other voids. The gravity stations on a level are tied to one or more base stations, which are usually located at the shaft or near the portal of an adit. The base stations may be part of a gravity control net that extends to each level in the mine as well as to the surface. Such a net extending throughout the potential area of the surveys is highly desirable, as it is then possible to compare all gravity stations on a uniform basis. The stations that are part of the base net should be carefully established by multiple readings and, if necessary, by a least squares adjustment of the loops. In some instances where levels do not have a shaft station, or where access may be blocked by caving, it may be necessary to establish secondary bases at the top and bottom of the raises that are between levels. Under fair conditions 70 to 90 gravity stations can be located and run in 6 hr by a two-man crew. The best field procedures depend on conditions. Reduction of Field Data: Most of the time required to produce a final gravity map is consumed in processing the data. 
Each meter reading must be corrected for a minimum of five factors that affect the gravity value in addition to the density contrast being sought. These factors are 1) instrumental drift, 2) station elevation, 3) topography, 4) latitude, and 5) regional gravity gradient. Mine openings, such as stopes and raises, will affect the value. However, it is seldom practical to make corrections for these voids. Usually a notation is made in the field notes at the station, and any
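The five routine corrections listed above lend themselves to a simple reduction script. The sketch below is illustrative only and is not the procedure used at Bisbee; the function name, the sign conventions, and all numeric station values are assumptions, and the free-air and Bouguer gradients quoted are the commonly tabulated per-foot values for rock density near 2.67 g per cu cm.

```python
# Illustrative sketch (assumed, not from the paper): applying the five
# standard corrections listed above to a raw gravity reading.
# Sign conventions and all station values here are hypothetical.

def corrected_gravity(reading_mgal, drift_mgal, elev_ft, lat_correction_mgal,
                      terrain_mgal, regional_mgal):
    """Return an anomaly value after the five routine corrections."""
    FREE_AIR = 0.09406   # mGal per ft, standard free-air gradient
    BOUGUER = 0.01276    # mGal per ft for rock density near 2.67 g/cu cm
    g = reading_mgal
    g -= drift_mgal                      # 1) instrumental drift
    g += (FREE_AIR - BOUGUER) * elev_ft  # 2) station elevation (combined)
    g += terrain_mgal                    # 3) topography / opening terms
    g -= lat_correction_mgal             # 4) latitude
    g -= regional_mgal                   # 5) regional gravity gradient
    return g

# hypothetical example station
anomaly = corrected_gravity(978.412, 0.03, 120.0, 0.02, 0.01, 0.05)
```

As the text notes, the topography and mine-opening terms are the costly part of this bookkeeping, which is why data reduction consumes most of the time needed to produce a final gravity map.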
Jan 1, 1957
-
PART V - Papers - Magnetic Analysis of Dilute Binary Alloys of Copper, Zinc and Magnesium in AluminumBy William C. Sleppy
The magnetic susceptibility of heat-treatable aluminum alloys is sensitive to changes such as solution or dissolution of solute and the precipitation of new phases. By measuring the change in the magnetic susceptibility of aluminum alloys caused by various heat treatments, an empirical relation was found from which atomic arrangements in dilute binary alloys of copper, zinc, and magnesium in aluminum have been delineated. The relation predicts the ultimate formation of CuAl2 when copper is precipitated from solid solution in aluminum. Evidence for short-range order is found for copper in solid solution in aluminum in the sense that copper atoms avoid being nearest neighbors to an extent greater than would result from a purely random arrangement. Hume-Rothery has predicted such short-range order for solid solutions of copper in aluminum. The relation for the Al-Zn system agrees with evidence obtained from X-ray scattering at small angles and predicts a tendency for zinc atoms to cluster in solid solution in aluminum. In the Al-Mg system, the empirical relation indicates an approach to random distribution of magnesium in solid solution in aluminum, with a tendency for magnesium segregation which increases with increasing temperature. THE magnetic properties of metals are complicated by the fact that contributions are made to them both by electrons of a "metallic" type which belong to the crystal as a whole, and by electrons in states localized on particular atoms. An expression1,2 for the bulk magnetic susceptibility of aluminum may be written as the sum of three contributions: XAl = XAl+3 + XPauli + Xdia [1] where XAl is the bulk susceptibility of aluminum per gram of material (in the cgs system, the units are those of reciprocal density); XAl+3 is the diamagnetic contribution of the electrons localized in ion cores; XPauli is 
the paramagnetic spin contribution of conduction electrons, often called Pauli paramagnetism; Xdia is the diamagnetic contribution of the conduction electrons, often called Landau diamagnetism. Ion core diamagnetism arises from the precession of the electron orbits which occurs when a magnetic field is applied to a system of electrons moving about a nucleus. Its contribution to the magnetic susceptibility is small, temperature-independent, and unaffected by alloying. The conduction electron diamagnetism is also temperature-independent and arises from the translatory motion of the electrons. For perfectly free electrons this contribution should be exactly one-third of the Pauli spin paramagnetism, but this relation is seldom even approximately true. Blythe2 determined the conduction electron diamagnetism in pure aluminum and found it to be extremely small. Any change in the conduction electron diamagnetism caused by alloying is neglected in this work. The Pauli paramagnetic contribution3 to the magnetic susceptibility of aluminum depends upon the number of electrons that occupy excited states and whose spins can be turned parallel to an applied magnetic field. The number of electrons free to turn in the field is proportional to the temperature, and each spin contribution to the susceptibility is inversely proportional to the temperature. A slight temperature dependence of Pauli paramagnetism occurs when the number of electrons occupying excited states cannot increase sufficiently to balance the inverse dependence on temperature of each spin contribution. The decrease of the magnetic susceptibility of aluminum with increasing temperature is attributed to a temperature dependence of the Pauli paramagnetism. Estimates of the Pauli paramagnetism of aluminum have been made by several workers.2,4,5 All of the values are in reasonably good agreement with each other. In this work XPauli at 17°C is taken as 0.761 x 10-8 cu cm per g. 
An expression similar to [1] can be written for the magnetic susceptibility of an aluminum-base alloy containing a fractional weight percent x of solute: Xa = (1 - x)XAl+3 + xXsolute ion + (XaPauli + Xadia) [2] where Xa is the magnetic susceptibility per gram of alloy, XAl+3 and Xsolute ion are the ion core diamagnetic contributions, and XaPauli and Xadia are the Pauli and diamagnetic contributions of conduction electrons in the alloy. If the components of a mixture are not alloyed but simply mixed together in their pure states without producing a new phase, then the magnetic susceptibility of the mixture is given by the Wiedemann additivity law: Xm = x1X1 + x2X2 + ... + xnXn [3] where Xm is the susceptibility per gram of mixture and the xn and Xn are the weight fractions and susceptibilities, respectively, of the pure components. The additivity law is not applicable to alloys because the outer electronic structures of the components are changed by alloying. Both the Pauli paramagnetism and Landau diamagnetism are affected; hence the magnetic susceptibility of an alloy is usually different from that calculated using the additivity law. In this work the difference, Xa - Xm, is taken as a measure of the change caused by alloying.
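The additivity law of Eq. [3], and the deviation Xa - Xm used as the measure of alloying effects, can be sketched numerically. The snippet below is a minimal illustration, not the authors' procedure; the susceptibility values for aluminum and copper are rough placeholders and the "measured" alloy value is assumed.

```python
# Minimal sketch (assumed, not from the paper): the Wiedemann additivity
# law of Eq. [3] and the deviation (X_alloy - X_mixture) taken in the
# text as the measure of alloying effects. Values are illustrative.

def wiedemann_mixture(weight_fractions, susceptibilities):
    """Susceptibility per gram of an unalloyed mixture, Eq. [3]."""
    assert abs(sum(weight_fractions) - 1.0) < 1e-9
    return sum(w * chi for w, chi in zip(weight_fractions, susceptibilities))

# Hypothetical example: 2 wt pct Cu in Al (values in 10^-6 cgs units per g)
chi_al, chi_cu = 0.61, -0.086
x_m = wiedemann_mixture([0.98, 0.02], [chi_al, chi_cu])
chi_alloy_measured = 0.59            # assumed measured alloy value
delta = chi_alloy_measured - x_m     # the quantity Xa - Xm of the text
```

A nonzero delta signals that alloying has altered the conduction-electron contributions, which is exactly the effect the paper exploits.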
Jan 1, 1968
-
Minerals Beneficiation - Sampling and Testing of SinterBy D. J. Carney, R. L. Stephenson
A sampling technique has been developed for procuring a sample of sinter representative of the entire depth of the sintering bed. The sampling method involves the use of an open-bottom metal basket that rides on the grate of the sintering machine and when removed contains a sample of the sintered product. Additional data have been obtained to indicate that the tumbler test is a suitable means of measuring sinter strength. IN the last few years additional sintering facilities have been installed in both the Pittsburgh and the Chicago districts of the United States Steel Co. Since the construction of these sintering plants made possible the use of higher percentages of flue-dust sinter in our blast-furnace burdens, it became important to study means of controlling the quality of sinter to obtain optimum results in the blast furnace. For controlling an operating process, it is necessary first to establish standards by which the quality of the product can be judged. For sinter, it appeared that an important property was its strength or its resistance to degradation during transportation and charging into the furnace. Consequently work was undertaken to establish a standard for sinter strength that could be used both for controlling sintering-plant operations and for correlating sinter quality with blast-furnace performance. The first problem in setting up a standard was that of procuring a sample that would be representative of the sinter made under any particular set of conditions at the sintering plant. Since the United States Steel Co. sintering plants discharge the finished sinter either into a large pit or onto a rotary cooler, the sinter becomes inseparably mixed with material sintered 2 hr before or 2 hr afterwards. For this reason the exact identity of the sinter is lost. A sample selected as the cooler is discharged, or as the sinter is removed from the pit, cannot be said to be truly representative of the sinter made at any specific time. 
Sampling The first attempt to procure a sample that would be representative of a specific sinter mix and of specific operating conditions was made by stopping the Dwight Lloyd sintering machine and removing an entire pallet full of sinter. This method, however, proved very difficult to perform and interfered considerably with the operation of the plant. To overcome this difficulty, a sampling method was devised by technologists at South Works enabling them to secure, without interrupting the sintering operation, a sample of about 1 cu ft of sinter, representative of sinter for the full depth of the sintering bed. The South Works method involves the use of a steel-framework basket. A typical basket is shown in Fig. 1. The basket has been used both with and without crossbars along the bottom. As long as the crossbars are in the same direction as the grate bars on the sintering machine, they do not interfere with the sintering process. The basket is set on an empty grate of the Dwight Lloyd sintering machine before it passes under the swinging feed spout (see Fig. 2). When the basket is removed after it has travelled the length of the sintering machine, it contains the sample. Just before the basket is removed, the sinter is scored and chipped to facilitate removal of the sample from the sinter bed. A view of the basket after its removal is shown in Fig. 3. Although the sampling method was originally designed for use on a Dwight Lloyd sintering machine, it can also be used on the Greenawalt type of machine. When used on the Greenawalt-type machine, the basket is placed on the sintering grate before the charging car passes over it, and it is removed just before the pan is dumped. Testing After a method of obtaining a representative sample of sinter had been developed, the next step was to select a method of measuring its strength. 
The irregular shape and size of the sinter pieces precluded the use of a simple compression test for determining strength; consequently, the shatter test and tumbler test were investigated. To perform the shatter test, a sample of sinter, approximately 5 lb, is dropped from a hinged-bottom box at a height of 3 ft onto a steel plate. The broken sinter is sieve-analyzed after a specified number of drops. The tumbler test is performed with the use of a standard ASTM coke-tumbling drum. The drum is 3 ft in diam and is equipped with two lifter bars diametrically opposite one another on the inner periphery of the drum. The drum is rotated at a speed of 24 rpm for 200 revolutions, and after tumbling the sample is sieve-analyzed. To express as single numbers the results of sieve analyses after shattering or tumbling, the method suggested by R. E. Powers1 was employed. This method involved plotting the size of the sieve openings on a logarithmic scale and the cumulative per cent larger than each sieve on a probability scale, as described by J. B. Austin. By interpolating from the plotted data, which in most cases approximated
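The log-size versus probability-scale interpolation described above can be sketched in a few lines. This is not the exact Powers/Austin construction, only an illustration of the idea: the probability scale is approximated with the inverse normal distribution, the interpolated index is taken at the 50 per cent point, and the screen data are hypothetical.

```python
# Illustrative sketch (assumed details): reducing a post-tumble sieve
# analysis to a single index by interpolating on log-size /
# probability-scale coordinates, in the spirit of the Powers method.
import math
from statistics import NormalDist

def median_size(openings_in, cum_pct_larger):
    """Interpolate the 50-pct-larger size from a sieve analysis.

    openings_in    -- sieve openings, inches, descending
    cum_pct_larger -- cumulative percent larger than each opening
    """
    nd = NormalDist()
    # transform: normal deviate of cumulative percent vs log of size
    xs = [nd.inv_cdf(p / 100.0) for p in cum_pct_larger]
    ys = [math.log10(d) for d in openings_in]
    # linear interpolation at the 50-pct point (deviate = 0)
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if min(x0, x1) <= 0.0 <= max(x0, x1):
            t = (0.0 - x0) / (x1 - x0)
            return 10 ** (y0 + t * (y1 - y0))
    raise ValueError("50-pct point outside the data")

# hypothetical screen data after tumbling
size = median_size([1.0, 0.5, 0.25, 0.125], [20.0, 45.0, 70.0, 90.0])
```

The attraction of such a single number is that it can be charted shift by shift for plant control, which is the use the paper has in mind.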
Jan 1, 1954
-
Institute of Metals Division - Strain Hardening and Stress Dependency of Dislocation Velocity in Alpha-Iron Alloys with a Dispersed PhaseBy J. O. Brittain, E. P. Lautenschlager, F. Felberbauer
This investigation was undertaken to evaluate the effect of a dispersed phase in alpha-iron upon the strain hardening and the stress dependency of dislocation velocity as inferred from the strain-rate sensitivity. Experiments were made on alpha-iron specimens with controlled amounts of carbon and/or Al2O3. They were tested at various temperatures between 195° and 373°K under a variety of conditions of concentration, strain rate, and quench aging. The resulting yield strength, flow strength, and strain hardening increased with the addition of Al2O3. The parameter m, measured from changes in strain rate, was used to describe the stress dependency of the dislocation velocity. m was found to increase with additions of Al2O3, increasing carbon in solution, and increasing temperature. When applied to a recent model for yielding based upon dislocation multiplication and velocity characteristics, the values of m alone did not successfully predict yielding for the materials of this investigation, but had to be adjusted with a consideration of the number of unlocked dislocations nucleated heterogeneously at discontinuities or inclusions. The change of yield strength with Al2O3 interparticle spacing appeared to obey a theory of Orowan. EVER since the concept of dislocations was introduced to explain plastic deformation in crystalline materials, the origin of dislocations and the source of the number of dislocations required to account for plastic deformation have been persistent questions. Ruling out the possibility of homogeneous nucleation,1 Gilman2 drew attention to the various cases of heterogeneous nucleation as the source of dislocations, whereas Frank and Read3 introduced the Frank-Read source. However, Gilman found that the Frank-Read sources did not play a dominant role in the deformation of LiF crystals, but that small precipitates are often, if not always, associated with the nucleation of dislocations. 
In LiF crystals, however, most of the dislocations arose through the multiplication of moving dislocations in a mechanism described by Koehler4 and Orowan.5 Thus, relatively few precipitates could account for a large number of dislocations. A similar observation of dislocation nucleation at inclusions or precipitates was made by Stein and Low6 on Si-Fe crystals. Their crystals contained
Jan 1, 1964
-
Institute of Metals Division - The Effect of Alloying Elements on the Plastic Properties of Aluminum AlloysBy P. Pietrokowsky, T. E. Tietz, J. E. Dorn
The amount of solid solution hardening in aluminum alloys was found to be dictated by two factors: the lattice strain, and the change in the mean number of free electrons per atom of the solid solution. To obtain this correlation it was necessary to assume that aluminum contributes two electrons per atom to the metallic bond. WHEN the modern scientific method of analysis was first being formulated, Francis Bacon recorded in his "Essays" (circa 1600) that "an alloy . . . will make the purer but softer metal capable of longer life." During the intervening centuries voluminous data have been reported which demonstrate that the additions of alloying elements do in fact increase the hardness and strength of the pure metals. Nevertheless, the significant details of this problem, the unique effect of each element toward enhancing the mechanical properties of alloys, have only recently been subjected to systematic scientific scrutiny. The major objective of this investigation is to determine how minor additions of alloying elements affect the plastic properties of polycrystalline aluminum alloys. By means of such studies it is hoped to provide not only data on the solution strengthening of aluminum alloys, but also a body of facts which will supplement the knowledge already available on the factors responsible for solution hardening in general. A review1-10 and analysis11 of the existing data on the effect of solute elements on the plastic properties of solid solutions reveal that our current knowledge of solid solution hardening is somewhat meager, inconsistent, and inconclusive. Many of the inconsistencies are undoubtedly attributable to the influence of unsuspected factors, such as purity, or uncontrolled factors, such as grain size, on the plastic properties of alloys. Nevertheless the following conclusions might be tentatively accepted: 1. Addition of solute elements invariably increases the yield strength, tensile strength, and hardness of the host element. 2. 
The rate of strain hardening, in general, increases with the concentration of the alloying element. 3. The strengthening effect in ternary alloys is the sum of the individual strengthening effects of the two solute elements as measured in their binary alloys. 4. The lattice strain is one factor that affects the strengthening of the alloy, but it is not the only factor. 5. A second factor might be the difference in valence between the solute and solvent metals. All of the available evidence is in complete agreement with the first conclusion; the remaining conclusions, however, are not in agreement with all of the published data, but, in each case, the major weight of the existing evidence favors these deductions. Additional investigations will be required before most of these tentative conclusions can be accepted without reservation. In the following report an extensive investigation of the plastic properties of binary aluminum alloys is described. This work was undertaken in an attempt to shed more light on the general problem of solid solution hardening. Materials for Test: Aluminum was selected as the solvent metal for the present investigation of the effect of solute elements on the plastic properties of alloys. This choice was made for several reasons: (1) There appears to be little fundamental data in the published literature on the effect of solute elements on the properties of high-purity aluminum alloys. In view of the ever increasing economic importance of aluminum, such data would be of basic interest to the metallurgists concerned with the development of new aluminum alloys. (2) Aluminum is thought to be only partially ionized in the metallic state, and consequently it might provide more complex relationships of the mechanical properties with the concentrations of the solute elements than simpler, fully ionized solvents would reveal. 
(3) The data on aluminum alloys will provide a broader basis for correlations between the mechanical properties of metals in general and the concentration and atomic properties of the solute elements than is now available. Some complications, however, attend the selection of aluminum: The solubilities of the various elements in the alpha aluminum phase are quite restricted, and not always well known. Consequently, only dilute solid solutions are available for study. This, however, may be somewhat advantageous because the dilute-solution laws presumably are simpler than those applying to concentrated solutions. In addition, strain-hardened pure aluminum is known to recover at atmospheric temperatures. Very likely its alloys exhibit slower recovery rates. Thus, the secondary effect of alloying elements on recovery might complicate the data. Such compli-
Jan 1, 1951
-
Geology-Its Application and Limitation in the Selection and Evaluation of Placer DepositsBy William H. Breeding
The remarks that follow are based substantially on experience covering 45 years, 80% of which has been in placer work, rather than on a review of available literature. Most commercial placers have been deposited by the action of water. The richer and more-difficult-to-mine placers are those in the headwater areas where gradients are steepest. The most lucrative placers are generally in intermediate areas where volumes are greater, fewer boulders are present, and gradients are from 3% to 1-1/2%. The higher-volume, lower-grade placers are in the lower reaches of river systems where gradients are lower. Where gold-bearing rivers have discharged into the sea, wave action can concentrate values on beaches, past and present. Most of the rich, readily accessible placers were mined by our forefathers. Current opportunities exist: (1) in remote areas where infrastructure has been absent in the past, or development has been prohibited by adverse ownership - political or commercial; (2) in deposits that could not be mined by equipment available to our forefathers; (3) in deposits unidentified by our forefathers; (4) where the price-of-product/cost ratio is substantially better than in earlier years; or (5) a combination of those factors. When I entered the placer business in the late 1930s, and subsequently, the prevailing opinion was that glacial deposits should be avoided as irregular in mineral content and composition, and unrewarding to explore and develop; yet an operator has been mining a fluvio-glacial deposit profitably for the past 17 years. Rich buried placer channels, often called paleochannels, were worked in the last century, generally by hand methods, and under conditions that would be unacceptable today. Exploration and mining equipment now available make some of these channels attractive targets. Well-known examples are in California and Australia. The formation of a commercial placer requires a source of valuable minerals. 
Above primary deposits, there may be eluvial deposits formed by the erosion of gangue minerals and the concentration "in situ" of valuable minerals. Down slope from these deposits are the hillside or colluvial deposits, and below them are the alluvial deposits of redeposited material. Most of the great placer fields of the world are the result of several generations of erosion and deposition. Well-known examples are in California and Colombia. Gold is a very resistant and malleable material, and gold placers may extend for 64 or 80 km (40 or 50 miles) along a river system. Platinum is less malleable, but is very resistant to disintegration. Diamonds are extremely hard, and (especially gem diamonds) may be found over great lengths of a river system. Cassiterite is less resistant to disintegration, and tin placers seldom extend over two miles without resupply from an additional source or sources of mineralization. Tungsten minerals are generally more friable, and within a few hundred yards of the source disintegrate to the point that they are uneconomical to recover. Rutile, ilmenite and zircon placers generally result from the weathering of massive deposits, and may be encountered over extensive areas; most are fine grained and durable. What does a geologist or mining engineer look for in placer exploration? The old adage to look for a mine near an existing mine is still valid. You need a source of valuable mineral. Then you require conditions for concentration, which means a satisfactory gradient and/or other conditions that will permit heavy minerals to settle. Nicely riffled gravel, often called a shingling of the bars, is conducive to placer formation. Coarser gravel is logically associated with coarser gold. Excessive clay and/or high stream velocities in narrow channels can carry gold far downstream and distribute it uncommercially over a large area. When material is extremely fine, in situ weathering and concentration become more important. 
Placers frequently occur distant from lode mines, and one must remember that in a larger watershed the exceptional floods that occur once in a hundred or a thousand years can move great quantities of material long distances. The carrying power of water is said to vary with the fifth or sixth power of its velocity. I am not ready to disagree with Waldemar Lindgren and accept that many commercial placers are substantially enriched by the chemical deposition of gold from solutions; however, I have seen crystalline gold in clayey material quite distant from known sources of primary gold that is dif-
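The quoted rule of thumb, that carrying power varies with the fifth or sixth power of stream velocity, implies dramatic transport during rare floods; a doubling of velocity multiplies carrying power thirty- to sixty-fold. The arithmetic below is a back-of-the-envelope illustration of that rule only, with hypothetical velocities.

```python
# Back-of-the-envelope sketch of the rule quoted above: carrying power
# taken to vary with the fifth or sixth power of stream velocity.
# Exponents per the text; velocities are hypothetical.

def carrying_power_ratio(v_flood, v_normal, exponent=6):
    """Relative carrying power for two velocities (exponent 5 or 6)."""
    return (v_flood / v_normal) ** exponent

ratio5 = carrying_power_ratio(2.0, 1.0, exponent=5)  # doubled velocity, 5th power
ratio6 = carrying_power_ratio(2.0, 1.0, exponent=6)  # doubled velocity, 6th power
```

This is why the text cautions that exceptional hundred- or thousand-year floods can move great quantities of material long distances from the source.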
Jan 1, 1985
-
Producing–Equipment, Methods and Materials - Burst Resistance of Pipe Cemented Into the EarthBy R. E. Zinkham, R. J. Goodwin
A mathematical study has been made of the amount of support a cement sheath could provide to casing cemented into the earth. Several assumptions were required to make the analysis, but only two of them are limiting: (1) the pipe must be completely surrounded with cement, and (2) any mud filter cake between the cement and formation has the same physical properties as either the cement or formation. The calculations showed that little support would be provided to the pipe before an unsupported cement sheath failed in tension; however, when the cement is confined between the pipe and wellbore and is loaded in compression, the pipe could receive a considerable amount of support. In fact, the theoretical results indicate the lower grades and larger sizes of pipe could have their working pressures doubled when reasonable compressive loads were applied to a surrounding cement sheath. These data are shown in six charts. Other down-hole conditions such as setting the cement under pressure, increased temperature and cement confinement all tend to increase the potential usefulness of the sheath. Because of size limitations, a laboratory program to verify the most important results of this mathematical study would be very difficult. However, small-scale field tests would be practicable. This paper shows that, if a solid cement sheath can be obtained in the field by either primary cementing or by repair after detection of flaws by surveys such as the new cement-bond logs, the use of this approach to reducing pipe costs merits further consideration. INTRODUCTION A modification in casing design practices is proposed which may either reduce the amount and grade of steel required to contain a specified internal pressure or permit the working pressure to be increased for a specified weight and grade of pipe. 
One of the more important considerations in casing design is its resistance to collapse; however, Bowers1 and, more recently, O'Brien and Goins2 have shown many casing programs are unnecessarily conservative in this respect, and they have indicated how savings can be made by designing for more realistic down-hole conditions. Earlier, Saye and Richardson3 showed that pipe costs could be reduced by considering the cement sheath as a part of the casing string when collapse resistance was being calculated. More recently, Rogers4 has raised the question as to whether a cement sheath might be considered in the design for burst resistance of the cemented casing. Calculations have been made for the increased burst resistance a cement sheath would provide for casing in a wellbore, and the results show that a sizable amount of support could be obtained in some instances. These data are presented in addition to a discussion of several other factors that are considered to affect the burst strength of pipe supported by cement. Two types of support are treated: Case I for tensile loading of the unconfined cement sheath, and Case II for compressive loading of the confined cement sheath. ANALYTICAL TREATMENT AND RESULTS CASE I—TENSILE STRESSES IN AN UNCONFINED CEMENT SHEATH Conditions like these would most likely occur in a greatly enlarged portion of the hole where the cement was not in immediate contact with either the formation or a thin and hard mud cake. The mathematical analysis for this condition, as shown in the Appendix, rests on the following concepts. Pressure inside a unit length of pipe causes: (1) a tensile or tangential stress to be exerted over the longitudinal cross-sectional areas of the pipe and cement; and (2) an equal amount of strain in both the pipe and cement that is uniformly distributed over the wall thickness of each. This analysis was then used to make several calculations for a cement sheath around 5 1/2-in. OD pipe. The results are illustrated in Fig.
1, which shows that a tensile stress of 500 psi is imposed on a 5-in. thick sheath when the casing contains a pressure of only 1,450 psi. It also shows that a 10-in. thick sheath would be stressed to 500 psi in tension when the pipe contained a pressure of only 2,350 psi. Alternatively, if the stress analysis is made by means of the Lamé thick-wall cylinder theory, the inner fibers of the 10-in. thick sheath will be stressed to 500 psi in tension when the pressure in the pipe is only 990 psi. This, of course, reveals that an unconfined sheath is of little support to the pipe in burst; however, an entirely different result is obtained when the cement is confined between the pipe and formation.
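The Lamé thick-wall cylinder theory invoked above can be sketched directly. The geometry below (a sheath around 5 1/2-in. OD pipe, 10 in. thick) follows the text, but the pressure applied at the cement's inner wall is an assumed illustrative value: in the paper's problem the steel pipe carries part of the internal pressure before the sheath sees any load, so this sketch does not reproduce the paper's 990-psi result.

```python
# Hedged sketch of the Lame thick-wall cylinder hoop (tangential) stress:
#   sigma_theta(r) = (p_i*a^2 - p_o*b^2)/(b^2 - a^2)
#                    + (p_i - p_o)*a^2*b^2 / (r^2*(b^2 - a^2))
# a = inner radius, b = outer radius, p_i/p_o = internal/external pressure.
# The 455-psi inner-wall pressure is an assumption for illustration only.

def lame_hoop_stress(a: float, b: float, p_i: float, p_o: float, r: float) -> float:
    """Tangential stress (psi) at radius r in a thick-walled cylinder."""
    a2, b2, r2 = a * a, b * b, r * r
    return (p_i * a2 - p_o * b2) / (b2 - a2) + (p_i - p_o) * a2 * b2 / (r2 * (b2 - a2))

a, b = 2.75, 12.75   # in.: 10-in. sheath around 5 1/2-in. OD pipe
# Hoop stress is highest at the inner fiber (r = a):
print(lame_hoop_stress(a, b, p_i=455.0, p_o=0.0, r=a))   # ~499 psi, near the 500-psi tensile limit discussed
```

The key qualitative point survives any choice of numbers: for internal pressure only, the inner-fiber tension exceeds the applied pressure whenever the sheath is thick, which is why the unconfined sheath fails in tension so early.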
-
Institute of Metals Division - The Hot Ductility of Nickel - By D. A. Kraai, S. Floreen
The effect of 1 to 50 ppm S on the ductility of nickel at 800° to 1400°F was studied. Results at each temperature showed a decrease in the reduction of area from approximately 95 to 5 pct over the sulfur range studied. Ductility varied with grain size, but only to a minor extent relative to the sulfur effect. The effects of sulfur were completely offset by the addition of small amounts of magnesium. The results indicate that the "hot-short" loss in ductility is not an inherent property of nickel. Some possible mechanisms which cause the loss in ductility are described. MANY metals or alloys that normally possess high ductility exhibit a ductility loss at intermediate temperatures. This loss in ductility is often called "hot-shortness". Numerous examples of this phenomenon have been reported in the literature. Much of this work has been reviewed by McLean1 and by Rhines and Wray.2 To date there is no fully satisfactory explanation of the cause of this intermediate-temperature hot-shortness. It is generally recognized that impurities, and particularly impurities that form low-melting phases, can cause embrittlement. Examples of hot-shortness have been reported, however, where there were no obvious impurities present which would lower the ductility. Thus there has been some basis for believing that hot-shortness is an inherent property, and that even the purest metal would display a hot-short loss in ductility. This latter hypothesis was recently put forward by Rhines and Wray2 based on studies of nickel and nickel alloys. In the discussion of this paper, however, Guard noted that high-purity nickel showed no hot-shortness.3 Thus there is reason to doubt whether pure nickel, or by inference any other pure metal, will inherently exhibit hot-shortness. The present work was initiated to determine the extent to which hot ductility was sensitive to very small amounts of an impurity element.
If it could be demonstrated that hot-shortness could be induced by only minor amounts of an impurity, then it might be argued that hot-shortness in general is an impurity effect, and not a fundamental property of pure metals. The particular impurity studied was sulfur in nickel. The deleterious effects of sulfur are well-known. It is also well-known, and will be shown below, that additions of magnesium will render sulfur innocuous. When no such refining agents are added, however, the Ni-S system is a very useful one for studying the influence of small amounts of impurities. EXPERIMENTAL PROCEDURE Two heats containing ~24 ppm S were vacuum-melted and small amounts of magnesium were then added under an argon atmosphere. These alloys were used to show the effectiveness of the normal magnesium treatment in overcoming the influence of sulfur. A second series of alloys with a sulfur range of 1 to 50 ppm was then prepared by vacuum melting nickel in alumina crucibles. No elements, such as magnesium, which tend to combine with sulfur were added. The higher sulfur contents were attained by adding nickel sulfide. Lower sulfur contents were prepared using a method in which the melt was oxidized under vacuum to produce the reaction S + 2O = SO2. These heats were subsequently deoxidized with carbon. Ten- to twenty-pound ingots were cast of all of the alloys studied. The compositions are given in Table I. The ingots were forged and hot-rolled to 3/4-in. bar. They were then annealed at either 2000° or 1600°F to produce different grain sizes. One-quarter-in.-diam tensile specimens were machined from the bars. These were tested at 800°, 1000°, 1200°, and 1400°F. The specimens were held at temperature approximately 45 min before testing. The strain rates were 0.005 min-1 to yielding, and 0.05 min-1 after yielding. No extensometers or gage marks were placed on the specimens because the higher sulfur heats tended to fracture at the knife-edge indentations or gage marks.
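Reduction of area, the ductility measure these tests report, is simply the fractional shrinkage of the fracture cross-section. The diameters below are hypothetical round numbers chosen to land near the 95-pct and 5-pct extremes quoted in the abstract; they are not the paper's data.

```python
# Reduction of area (pct) for a round tensile bar, computed from the
# initial and final (fracture) diameters. Diameters are hypothetical.
import math

def reduction_of_area(d_initial: float, d_final: float) -> float:
    """Percent reduction of cross-sectional area at fracture."""
    a0 = math.pi * d_initial ** 2 / 4.0
    af = math.pi * d_final ** 2 / 4.0
    return 100.0 * (a0 - af) / a0

# A ductile low-sulfur specimen necking from 0.250 in. to 0.056 in. diameter:
print(round(reduction_of_area(0.250, 0.056), 1))    # 95.0
# A hot-short high-sulfur fracture with almost no necking, 0.250 to 0.2435 in.:
print(round(reduction_of_area(0.250, 0.2435), 1))   # 5.1
```

Because the area goes as the diameter squared, even a modest diameter change registers strongly in this measure, which is why it resolves the sulfur effect so cleanly.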
The properties measured were ultimate tensile strength and reduction of area. The analytical technique for determining sulfur at low levels was that developed by Burke and Davis.4 They reported a standard deviation of 1 ppm at an average sulfur level of 4 ppm in NBS standards. A standard deviation of 3 ppm is probably more realistic for the alloys used in this investigation considering the possibility of some segregation in the ingots. RESULTS A summary of the tensile results is given in Table I. As shown in the table, both heats to which
Jan 1, 1964
-
Institute of Metals Division - Role of the Binder Phase in Cemented Tungsten Carbide-Cobalt Alloys - By J. T. Norton, Joseph Gurland
IN spite of the extended use and high state of practical development of the cemented tungsten carbides, the structure of these alloys is still a matter of considerable controversy. The characteristic high rigidity and rupture strength of sintered compacts have been attributed to a continuous skeleton of tungsten carbide grains, formed during the sintering process. This view is based mainly on the work of Dawihl and Hinnuber,1 who reported that a sintered compact of 6 pct Co maintained its shape and some of its strength after the binder was leached out with boiling hydrochloric acid. After leaching, only 0.04 pct Co was reported to remain in the compact. They also showed that the assumed increasing discontinuity of such a skeleton, as the cobalt content is increased, could be made to account for the observed discontinuous increase of the coefficients of thermal expansion, the loss of rigidity, and the impaired cutting performance of alloys of more than 10 pct Co. Contradictory evidence was cited by Sanford and Trent,2 who mentioned that a sintered compact was destroyed by reacting the binder with zinc and leaching out the resulting Zn-Co alloy. The skeleton theory also does not account for the observed change of strength of sintered compacts as a function of cobalt content. If the skeleton is responsible for the strength, the latter would be expected to decrease with increasing binder content. Actually, the strength increases and reaches a maximum around 20 pct Co. In addition, tungsten carbide is brittle and undoubtedly very notch sensitive. The highest value found in the literature for the transverse rupture strength of pure tungsten carbide prepared by sintering is 80,000 psi.3 Therefore, such a skeleton does not easily account for a rupture-strength value of 300,000 psi and higher, commonly found in sintered tungsten carbide-cobalt compacts.
In view of the conflicting data present in the literature, experiments were undertaken to determine whether the sintering of tungsten carbide-cobalt alloys leads to the formation of a carbide skeleton or whether the densification behavior and the properties of cemented compacts are consistent with a structure of isolated carbide grains in a matrix of binder metal. The specimens were prepared from powders of commercial grade. Tungsten carbide powder ranged in particle size from 0 to 5 x 10-4 cm. Mixtures of tungsten carbide and cobalt were ball milled in hexane for 48 hr in tungsten carbide lined mills. After milling, the specimens were pressed in a rectangular die (1 x 1/4 x 1/4 in.) at 16 tons per sq in. No pressing lubricant was used. Sintering of the tungsten carbide-cobalt compacts was carried out in a vertical tube furnace equipped with a dilatometer (Fig. 1), by means of which the change of length of the powder compacts could be followed from room temperature to 1500°C. An atmosphere of 20 pct H2, 80 pct N2 was maintained inside the furnace. Decarburization of the samples was prevented by the presence of small rings of graphite inside the furnace tube. The temperature of the sample was measured by a platinum-platinum-rhodium thermocouple, which also was part of a temperature control system able to maintain a constant temperature within ±10°C. Pure tungsten carbide compacts were prepared by sintering the carbide without binder or by evaporating the binder from sintered compacts in vacuum at 2000°C. Since complete densification of these samples was not desired, they were sintered only to 60 or 80 pct of the theoretical density of tungsten carbide. The specimens were prepared for metallographic examination by polishing with diamond powders and etching with a 10 pct solution of alkaline potassium ferricyanide. Cobalt etches light yellow and the carbide gray.
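The "60 or 80 pct of the theoretical density" benchmark above rests on a rule-of-mixtures theoretical density. The sketch below is a hedged illustration of that standard calculation; the handbook densities for WC and Co are assumed values, not figures from the paper.

```python
# Hedged rule-of-mixtures sketch: the fully dense composite density is the
# inverse of the mass-fraction-weighted inverse densities. Handbook values
# (g/cm^3) below are assumptions, not from the paper.
RHO_WC = 15.63   # tungsten carbide
RHO_CO = 8.90    # cobalt

def theoretical_density(wt_frac_co: float) -> float:
    """Fully dense WC-Co density (g/cm^3) from the cobalt weight fraction."""
    return 1.0 / (wt_frac_co / RHO_CO + (1.0 - wt_frac_co) / RHO_WC)

def pct_theoretical(measured: float, wt_frac_co: float) -> float:
    """Measured bulk density as a percentage of theoretical."""
    return 100.0 * measured / theoretical_density(wt_frac_co)

# A binderless WC compact sintered to a bulk density of 12.5 g/cm^3:
print(round(pct_theoretical(12.5, 0.0), 1))   # 80.0
```

The same calculation, with the cobalt fraction included, gives the full-density target against which the dilatometer shrinkage curves of the WC-Co compacts can be judged.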
The amount of porosity is exaggerated since it is difficult to avoid tearing out carbide particles, especially from incompletely sintered samples. Experimental Observations A number of specific experiments were carried out in order to study some particular aspect of the sintering problem. The details of these experiments, together with their results, are as follows: Electrolytic Leaching: The binder was removed by electrolytic leaching from sintered tungsten carbide-cobalt compacts for the purpose of determining the continuity of the carbide phase. The method used was based on the work of Cohen and coworkers4 on the electrolytic extraction of carbides from annealed steels. If the sample is made the anode, using a 10 pct hydrochloric acid solution as the electrolyte, the binder is dissolved, but the rate of solution of tungsten carbide is negligible. A current density of 0.2 amp per sq in. was applied. As shown in Fig.
Jan 1, 1953
-
Minerals Beneficiation - Application of Closed-Circuit TV to Conveyor and Mining Operations - By G. H. Wilson
INTRODUCED in 1946 to serve a need in power-plant operation, closed-circuit TV has been used by well over 200 organizations in approximately 25 different industries. Known as industrial television, or simply ITV, it can be described as a private system wherein the television signal is restricted in distribution, usually by confinement within coaxial cable that directly connects the TV camera to one or several monitors, Figs. 1, 2. The picture is continuous and transmission is instantaneous, permitting an observer to see an operation that may be too distant, too inaccessible, or too dangerous to be viewed directly. Destructive testing or the machining of high explosives can now be conducted hundreds of feet away by personnel who still have close control through the eyes of the TV camera. It is also possible for one man to control operations formerly requiring the co-ordinated efforts of several workers. For example, at a large midwestern cement plant conveyance of limestone from primary crusher to raw mill and loading into five storage bins once necessitated the work of two men, one having little to do but prevent spilling of material by manually moving the tripper on the belt conveyor as occasion required. TV cameras mounted on the tripper now provide bin level indication to the conveyor operator at the crusher position so he is able to control the entire loading operation remotely, Fig. 3. By means of a switch, the picture from either camera is alternately available on a single viewer, or monitor, Fig. 4. Each camera is mounted on the tripper by means of a simple adjustable support and looks down into the bin, which is identified by the number of cross members on the vertical rod. Each associated power unit is located on a platform above the camera, Fig. 5. This centralized control by means of TV often has produced superior results, and in many instances saving in operating costs has been sufficient to write off equipment costs within six months to a year. 
Where a key portion of a process may be enclosed or otherwise inaccessible, TV again reduces the likelihood of mistakes and permits closer control by making available to the operator valuable information he might otherwise never possess. An example of this can be found at a strip mine where the coal seam lies 50 ft or more below the overburden, which is removed by a large wheel shovel. From his centrally located position the shovel operator was unable to judge accurately to what extent the wheel buckets engaged the earth. His chief indication of efficiency was the amount of overburden on the belt conveyor as it passed his control point 75 ft from the wheel. Now, two television cameras mounted on the tip of the boom permit the operator to view the wheel from each side and provide him with a close-up view of the buckets so that he can take immediate and continuous advantage of their capacity, quickly compensating for ground irregularities and avoiding obstructions, Fig. 6. While the word television conjures up visions of highly complex and intricate apparatus such as that employed in modern TV studios and transmitting stations, the term industrial television should indicate compact, straightforward equipment. Most present-day ITV systems contain fewer than 25 tubes including camera and picture tubes. The average home television receiver alone requires at least that many tubes. Equipment like that illustrated in Fig. 1 contains only 17 tubes, of which 3 are in the camera. It can operate continuously and dependably, without protection, in any temperature from 0° to 150°F. It consumes less current than a toaster and weighs under 140 lb. Camera and monitor may be separated by 1500 to 2000 ft and by greater distance with additional amplification. This equipment is designed to withstand vibrations up to 21/16 in. and will operate successfully under more severe conditions of vibration and heat when suitable enclosures are provided.
Any number of cameras may be switched to a single monitor, and any number of monitors, within reason, used simultaneously. Two types of applications in the mining industry have already been described. A third under serious consideration by several organizations will make use of ITV for remote observation of conveyor transfer points at copper concentrating plants so that evidence of belt breakdown and plugging of transfer chutes can be spotted immediately and costly overflow of material avoided. A television camera will soon be installed to view a trough conveyor near the exit of an iron-ore crusher to indicate clogging of the crusher as evidenced by reduction or absence of material on the
Jan 1, 1955
-
Uranium Ore Body Analysis Using The DFN Technique - By James K. Hallenburg
INTRODUCTION The delayed fission neutron, or DFN, technique for uranium ore body analysis is the first down-hole method for detecting uranium in place quantitatively. This technique detects the presence of and measures the amount of uranium in the formation. DFN TECHNIQUE DESCRIPTION The DFN technique depends upon inducing a fission reaction in the formation uranium with neutrons, resulting in an anomalous and quantitative return of neutrons from the uranium. Since there are no free, natural neutrons in the formation, a good, low-noise assessment may be made. There are several methods available for determining uranium quantity in situ. The method used by Century uses an electrical source of neutrons. This is a linear accelerator which bombards a tritium target with high-velocity deuterium ions. The resulting reaction emits high-energy neutrons which diffuse into the surrounding formation. They lose most of their energy until they come to thermal equilibrium with the formation. Upon encountering a fissile material, such as uranium, these thermal neutrons will react with the material. These reactions produce additional neutrons, the number of which is a function of the number of original neutrons and the amount of fissile material exposed. The particular source used, the linear accelerator, has several distinct advantages over other types of sources: 1. It can be turned off. Thus, it does not constitute a radioactive hazard when it is not in use. 2. It can be gated on in short bursts (6 to 8 microseconds). This results in measurements free of a high background of primary neutrons. 3. The output can be controlled. Thus, the neutron output can be made the same in a number of tools, easily and automatically. There are several interesting reactions which take place during the lifetime of the neutrons around the source. During the slowing down or moderating process the neutron can react with several elements. One of these is oxygen 17.
This results in a background level of neutrons in any of the measurements which must be accounted for in any interpretation technique. These elements are usually uninteresting economically. The high-energy neutrons will also react with uranium 238. However, the proportions of uranium 235 and 238 are nearly constant. Therefore, this reaction aids detection of uranium mineral and need not be separated out. Upon reaching thermal energy the neutrons will react with any fissile material, uranium 235, uranium 234, and thorium 232. At present, we do not have good techniques for separating out the reaction products of uranium 234 and thorium 232. However, uranium 234 is a small (0.0055%) percentage of the uranium mineral and thorium 232 is usually not present in sedimentary deposits. When the uranium 235 reacts with thermal neutrons it breaks into two or more fragments and some neutrons. This occurs within a few microseconds after the primary neutrons have moderated and is the prompt reaction. One system, the PFN or prompt fission neutron technique, uses this. We don't use this method because the neutron population is low and, therefore, the signal is small and difficult to work with accurately. Within a few microseconds to several seconds the fission fragments also decay with the emission of additional neutrons. Now, with a long time period available and a large neutron population, we gate off the generator and measure the delayed fission neutrons after a waiting period. These neutrons can be a measure of the amount of uranium present around the probe. Thermal neutrons are detected with the DFN technique instead of capture gamma rays to avoid some of the returns from elements other than uranium. LOGGING TECHNIQUE The exact logging technique will depend, to some extent, upon the purpose of the measurement. However, the general technique is to first run the standard logs. These will include: 1.
The gamma ray log for initial evaluation of the mineral body and for determining the position of the borehole within the mineral body; 2. The resistance or resistivity log for determining the formation quality, lithology, and porosity; 3. The S. P. curve for estimating the redox state and shale content, and measuring formation water salinity; 4. The hole deviation for locating the position, depth, and thickness of the mineral (and other formations); and 5. The neutron porosity curve. The neutron porosity curve is most important to the interpretation of the DFN readings. The neutrons from this tool are affected in the same way by borehole and formation fluids as the DFN neutrons are. Therefore, we can use this curve to determine the effect of the oxygen 17 in the water. Of course, this curve can be used to determine formation porosity. It can also be used to calculate formation density.
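The interpretation step just described, using the neutron porosity curve to remove the water/oxygen-17 contribution from the gated DFN counts, amounts to a background subtraction. The sketch below is a hedged illustration of that idea only: the proportionality constant k and all count values are hypothetical, and a real calibration would be tool-, hole-, and fluid-specific.

```python
# Hedged sketch of the background correction described in the text: the
# neutron porosity tool responds to borehole/formation water much as the
# DFN neutrons do, so its reading can stand in for the oxygen-17 background.
# k and all counts below are hypothetical illustration values.

def corrected_dfn(dfn_counts: float, porosity_counts: float, k: float) -> float:
    """Delayed-fission-neutron counts with the water/O-17 background removed."""
    return dfn_counts - k * porosity_counts

# Example: 1,200 gated DFN counts against 4,000 porosity-tool counts, k = 0.05:
print(corrected_dfn(1200.0, 4000.0, 0.05))   # 1000.0
```

The corrected count is then the quantity taken as proportional to the uranium around the probe; the raw count alone would overstate it wherever the hole or formation is wet.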
Jan 1, 1979
-
Economy of Electricity over Steam for Power Purposes in and about Mines - By R. E. Hobart
THE development of the Hauto power plant and the claims made by various engineers that electricity was more economical than steam for power purposes in and about the mines led the Lehigh Coal and Navigation Co., in 1911, to conduct a test to ascertain the consumption of power used by a large steam hoisting engine. The question being one on which no reliable information could be found, and the opinions of various engineers differing to such an extent, it was decided that a test under actual operating conditions was necessary. This was arranged for at one of the collieries. The engine selected was a 30 by 60-in. (76.2 by 152.4-cm.) piston-valve engine of modern type, and comparatively new. The boiler plant from which the hoist received its power was about 600 ft. (182 m.) from the engine, the latter being fed by a 10-in. (25.4-cm.) steam line insulated with magnesia pipe covering. Two batteries of the boiler plant, aggregating 1200 hp., were cut off and fed into a separate steam line leading direct to the hoist engine. Steam auxiliaries, consisting of feed-water pump and blowers, were fed by the boilers in test, and their consumption charged against the hoist. A barrel weighing device was installed, as it was felt that this would be the most accurate way of determining the consumption of water. The fuel, No. 3 buckwheat, was carefully weighed, and every precaution was taken to make the test accurate in every particular. The test was run for 1 week, or a total of 168 hr. A record was kept of the number of trips hoisted or lowered, and continuous indicator cards were taken. One particular set of cards was taken with the hoist operating balanced, and also with the hoist operating with no counterweight other than the empty cage, the coal in the car being weighed. The speed of the hoist was taken by means of a graphic recording instrument which registered the number of revolutions every 5 sec. From this record, speed-time curves were plotted.
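The speed-time curves mentioned at the end follow directly from the recorder's revolution counts. The sketch below shows that conversion; the drum diameter and the revolution counts are hypothetical figures for illustration, not data from the 1911 test (the 5-second interval is from the text).

```python
# Hedged sketch: converting drum revolutions per 5-second interval (as
# logged by the graphic recording instrument) into rope speed in ft/min.
# Drum diameter and revolution counts are assumed, not from the test.
import math

DRUM_DIAMETER_FT = 10.0   # assumed hoist drum diameter
INTERVAL_S = 5.0          # recording interval stated in the text

def speeds_fpm(revs_per_interval):
    """Rope speed (ft/min) for each 5-second revolution count."""
    circ = math.pi * DRUM_DIAMETER_FT
    return [n * circ / INTERVAL_S * 60.0 for n in revs_per_interval]

# Acceleration, full speed, then retardation over a short hoist:
print([round(v) for v in speeds_fpm([1, 3, 5, 5, 3, 1])])   # [377, 1131, 1885, 1885, 1131, 377]
```

Plotting these values against elapsed time reproduces the kind of speed-time curve the test engineers drew, from which acceleration and peak-demand periods of the hoist cycle can be read off.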
Jan 2, 1918